The Privacy Company Does Not Encrypt Your Backups

Tuesday, 21 January 2020

Joseph Menn, for Reuters:

Apple Inc dropped plans to let iPhone users fully encrypt backups of their devices in the company’s iCloud service after the FBI complained that the move would harm investigations, six sources familiar with the matter told Reuters.

Ever since Trump lashed out on Twitter about Apple not helping the government unlock iPhones, and Apple responded that it had actually cooperated by handing over iCloud backups, something has seemed odd.

Now, with this Reuters report, can we finally affirm it: that’s the backdoor?


Apple Acquires AI Startup Xnor.ai

Thursday, 16 January 2020

Apple has acquired another Seattle-based AI startup, Xnor.ai. The startup specializes in low-power edge-based tools that allow AI to operate on devices, rather than in the cloud.

This kind of on-device AI can increase the privacy of processing, enable offline behavior for Siri, and help it get smarter.

Nota Terminal

Thursday, 16 January 2020

Nota is a nice terminal calculator with rich notation rendering. It is designed for your quick calculations and therefore provides you with a tiny and beautiful language so you can express your ideas easily. Keep in mind that Nota is all about beauty and ASCII art.

Fun to play around with if you enjoy math and ASCII, and don’t mind going without the arrow keys.

Deep Fusion Paid Off Again

Sunday, 12 January 2020

A couple of months ago I took this random picture at my place. I didn’t realize that my Halide configuration was set to use Deep Fusion. Then I took the same shot in normal mode, and the comparison was mind-blowing.

Look at the lit wall in both pictures! The first one is without Deep Fusion and the second one is with it.

I shared it on Twitter in October.

Deep Fusion Off

Deep Fusion On

The grain in the wall is far more visible in the second picture.

Deep Fusion paid off once again.

I was at the Louvre this weekend for the special exhibition on the 500th anniversary of Leonardo da Vinci’s death. The exhibition gathers over one hundred works by the Renaissance artist.

In the pictures below of the Portrait of Leonardo da Vinci in red chalk, attributed to Francesco Melzi, we can see how much detail Deep Fusion was able to extract from the piece.

Deep Fusion Off

Deep Fusion On

Deep Fusion Off Cropped

Deep Fusion On Cropped

This is astonishing!

Daughter by Apple

Sunday, 12 January 2020

Once again Apple enlists Hollywood talent to put the iPhone 11 Pro to work. This time, Theodore Melfi (director) and Lawrence Sher (director of photography) collaborate on a sensitive piece, with Zhou Xun as the leading actress.

The story is about a mother who has to figure out ways to take care of her child while facing unresolved issues with her own mother.

Lawrence Sher is one of the big names in the spotlight at this year’s Academy Awards for his applause-worthy work on Joker. Theodore Melfi directed Hidden Figures, nominated for Best Picture at the 2017 Oscars.

The iPhone 11 Pro is a storytelling tool. It makes me feel that anything is possible. So write a story, reach into your pocket, grab your phone, and show us how you see the world.

Here you can see the making-of.


Apple’s Services Getting In The Way

Friday, 10 January 2020

Tim Hardwick summarized Apple’s services year-in-review post in bullet points for MacRumors. It clearly shows how Apple is trying alternatives so it doesn’t depend solely on iPhone sales. iPhones are selling less for at least two reasons: they are too expensive, and people are keeping them for longer periods.

App Store customers spent a record $1.42 billion between Christmas Eve and New Year’s Eve, a 16 percent increase over last year, and $386 million on New Year’s Day 2020 alone, a 20 percent increase over last year and a new single-day record.

It seems that Services might be the right bet.

Did We Mention How Safe It Is?

Friday, 10 January 2020

Linking to John Gruber’s post on Daring Fireball, which linked to Apple Recaps Its Year in Services.

The App Store might be reasonably safe, but the reviewers also allow this type of scam to be within everyone’s reach.

How Well Can Computers Connect Symptoms to Diseases?

Friday, 10 January 2020

Rob Matheson, for MIT News:

The team analyzed how various models used electronic health record (EHR) data, containing medical and treatment histories of patients, to automatically “learn” patterns of disease-symptom correlations. They found that the models performed particularly poorly for diseases that have high percentages of very old or young patients, or high percentages of male or female patients – but that choosing the right data for the right model, and making other modifications, can improve performance.

We are still in the very early stages of medical AI, but we need to start somewhere. This decade has every indication of being AI-focused.

Choices in the dataset-creation process impacted the model performance as well. One of the datasets aggregates each of the 140,400 patient histories as one data point each. Another dataset treats each of the 7.4 million annotations as a separate data point. A final one creates “episodes” for each patient, defined as a continuous series of visits without a break of more than 30 days, yielding a total of around 1.4 million episodes.

Computers do things incomparably faster than we do. Bringing this amount of data together is remarkable.

I see this data in my mind like The Matrix.

You Can Use SwiftUI Today

Tuesday, 7 January 2020

Gui Rambo:

If you don’t have the habit of creating your own internal tools to make your job easier, I highly recommend it. They don’t have to be perfect, they just have to fulfill your own needs, and you should make them with no expectation that you’re ever going to make them available to anyone else.

That’s true! SwiftUI is more than ready for personal tools that can help you on a daily basis.

When I was rebuilding this site, I had in mind that I didn’t want to have a computer with me all the time to write a post, so I wanted this process to be as flexible as my iPhone or iPad.

This site is built on top of Jekyll, and Jekyll files have a specific header where you can add post information, such as the title, date, external URL, excerpt, and so on.
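As an illustration, such a front matter header for a linked-list post might look like the snippet below; `title` and `date` are standard Jekyll fields, while `external_url` and `excerpt` are hypothetical custom fields whose names depend on the site’s layout:

```yaml
---
title: "You Can Use SwiftUI Today"
date: 2020-01-07
external_url: "https://example.com/linked-article"  # hypothetical custom field
excerpt: "SwiftUI is more than ready for personal tools."
---
```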

This is where SwiftUI came in handy for me. I wrote an app a couple of weeks ago to help me set up this Jekyll header in a blink and save time.

Normally I use Ulysses to write my posts and Working Copy to publish them. Since they are also on iOS, my life got even easier.

This very post was written and published right from my iPhone.

When All Email Protection Tools Fail, AI Comes to the Rescue

Monday, 6 January 2020

Eyal Benishti, for TechTalks:

AI has the capability to go beyond signature detection and dynamically self-learn mailbox and communication habits. Thus, the system can automatically detect any anomalies based on both email data and metadata, leading to improved trust and authentication of email communications.

Another advantage of AI is its ability to scan disparate systems and detect patterns.

The article is an interesting list of the ways our email gets hacked and how AI can keep it safe.

Applications can use AI to categorize emails and take actions without us even knowing it, much like filters do today. Not only that, it learns over time, getting smarter.

Maybe this will be one of the advantages when the Singularity is reached.


Making an Indie App This Year

Friday, 3 January 2020

One of my resolutions for 2020 is to release a new app, after a couple of years without making one while focusing instead on the open source community. Now the paths have been swapped.

I don’t have any ideas just yet.

I decided to use an old Apple ID of mine to enroll in the Apple Developer Program, because my former developer Apple ID is tied to Brazil and I didn’t feel like changing it to Europe due to a bit of bureaucracy. However, when I tried to add two-factor authentication to this account, it wouldn’t let me. Apparently I have to wait three days after making a significant change to the account.

When changes are made to security settings on an Apple ID, such as a security questions reset or a password change, the two-factor authentication process can be delayed.

What I did was simply reset the security questions I didn’t remember. But guess what? To enroll an account in the Developer Program you need two-factor authentication enabled. Therefore I cannot push code to a device until then.

These things are confusing, just like logging in to iCloud from a browser and getting prompted for the six-digit code right on the computer you are using to log in. Very secure!

Using AI to Improve Breast Cancer Screening

Wednesday, 1 January 2020

Shravya Shetty, M.S. and Daniel Tse, M.D., for Google Health Blog:

These findings show that our AI model spotted breast cancer in de-identified screening mammograms (where identifiable information has been removed) with greater accuracy, fewer false positives, and fewer false negatives than experts. This sets the stage for future applications where the model could potentially support radiologists performing breast cancer screenings.

We also wanted to see if the model could generalize to other healthcare systems. To do this, we trained the model only on the data from the women in the U.K. and then evaluated it on the data set from women in the U.S. In this separate experiment, there was a 3.5 percent reduction in false positives and an 8.1 percent reduction in false negatives, showing the model’s potential to generalize to new clinical settings while still performing at a higher level than experts.

News like this is always impressive. The rate of false positives dropped, and that is something to celebrate. This is the scenario I always think of when the “Artificial Intelligence and Humans” topic is discussed.

The Friction of Exporting Live Photos, Portraits and RAW Images From iOS

Wednesday, 1 January 2020

A fresh new day of a brand new year has come, so you might want to check the pictures you took during the New Year’s Eve celebration. If you have iCloud Photos enabled, then it’s all fine: your data is saved as is, including your Live Photos and Portraits. However, trouble comes if you want to export your pics right from your iOS device to a third-party storage service like OneDrive, Google Drive, or Dropbox.

When you upload an image from your iOS device using these services’ apps, they convert your Live Photos or RAW images into JPEG files, removing all the important content of the picture. This is unbelievable!

If you still want to save your pictures in those services and keep the full range of data, you need to use the Mac Photos app, which requires having a Mac available. The biggest advantage of Photos is extracting the actual content of the media with the “Export Unmodified Original” option: a Live Photo is extracted as a still image plus its corresponding video, a RAW image yields a RAW file along with a generated JPEG counterpart, and so on. Portraits are a different case, though: their metadata is not saved in a separate media file. But, for a trip, would you bring your Mac along just for that?

The issue here is not iOS itself, but the laziness, or lack of priority, of these services in providing this feature in their apps.

Programmatically speaking, digging into the Photos API we can find two classes, PHAsset and PHInternalAssetResource. They are basically the classes that handle media on iOS.

The first snippet below is a normal image. As you can see, it’s a JPEG file with only one resource, so only one image is extracted from this asset.


<PHAsset: 0x105521bb0> 6D5E7526-BF88-4AC5-8570-6A453816F181/L0/001 mediaType=1/16, sourceType=1, (3024x4032), creationDate=2018-11-16 15:09:21 +0000, location=1, hidden=0, favorite=0 
[
	<PHInternalAssetResource: 0x282f64ab0> type=photo size={3024, 4032} fileSize=1683098 uti=public.jpeg filename=IMG_4111.JPG assetLocalIdentifier=6D5E7526-BF88-4AC5-8570-6A453816F181/L0/001
]

The second asset is a Live Photo and contains two resources, an HEIC image and a video. This allows us to extract two files.


<PHAsset: 0x1055216d0> 30584E6D-21BD-43AD-9EE4-0746077EFDCB/L0/001 mediaType=1/8, sourceType=1, (3024x4032), creationDate=2018-11-17 15:53:43 +0000, location=1, hidden=0, favorite=0 
[
	<PHInternalAssetResource: 0x282f6f690> type=photo size={3024, 4032} fileSize=1527095 uti=public.heic filename=IMG_4232.HEIC assetLocalIdentifier=30584E6D-21BD-43AD-9EE4-0746077EFDCB/L0/001,
	<PHInternalAssetResource: 0x282f646c0> type=video_cmpl size={0, 0} fileSize=2450384 uti=com.apple.quicktime-movie filename=IMG_5392.MOV assetLocalIdentifier=30584E6D-21BD-43AD-9EE4-0746077EFDCB/L0/001
]

And the third asset is a RAW image. It generates two files, a CR2 RAW image and a JPEG image.


<PHAsset: 0x105522770> ABCFBBA5-2B4E-4EEE-97E0-1771F5A6B46D/L0/001 mediaType=1/0, sourceType=1, (6000x4000), creationDate=2018-11-17 15:48:29 +0000, location=0, hidden=0, favorite=0
[
	<PHInternalAssetResource: 0x282f7fc30> type=photo size={6000, 4000} fileSize=4449694 uti=public.jpeg filename=IMG_3317.JPG assetLocalIdentifier=ABCFBBA5-2B4E-4EEE-97E0-1771F5A6B46D/L0/001, 
	<PHInternalAssetResource: 0x282f685a0> type=photo_alt size={6000, 4000} fileSize=28951621 uti=com.canon.cr2-raw-image filename=IMG_3317.CR2 assetLocalIdentifier=ABCFBBA5-2B4E-4EEE-97E0-1771F5A6B46D/L0/001
]

All of this made me realize that it’s not our iPhones and iPads that have this limitation; those companies’ apps are simply poorly implemented.
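As a sketch of what a better implementation could do, the public PHAssetResource API (the public counterpart of the private PHInternalAssetResource shown in the dumps above) lets an app enumerate every resource of an asset and write each one out unmodified. The function below is illustrative only and assumes photo library authorization has already been granted:

```swift
import Photos

// Sketch: export every underlying resource of an asset, unmodified.
// A Live Photo yields a .photo plus a .pairedVideo resource; a RAW+JPEG
// capture yields a .photo plus an .alternatePhoto resource.
func exportOriginals(of asset: PHAsset, to directory: URL) {
    let resources = PHAssetResource.assetResources(for: asset)
    for resource in resources {
        let destination = directory.appendingPathComponent(resource.originalFilename)
        let options = PHAssetResourceRequestOptions()
        options.isNetworkAccessAllowed = true // fetch the original from iCloud if needed
        PHAssetResourceManager.default().writeData(
            for: resource,
            toFile: destination,
            options: options
        ) { error in
            if let error = error {
                print("Failed to export \(resource.originalFilename): \(error)")
            }
        }
    }
}
```

With something like this, a Live Photo would come out as both its HEIC still and its QuickTime movie, and a RAW capture as both its CR2 and JPEG, which is exactly what these third-party apps fail to preserve.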

Sadly, it doesn’t stop there: even camera and editing apps don’t support it. With my favorite camera app, Halide, you can only export one RAW file at a time, and there’s a whole to-do list to follow: open the image, tap the RAW tag in the top right corner, then export it. Now imagine the hundreds of RAW pictures you took during a trip and having to go through those steps for every single one of them.

The only ones I have found so far that do the job a little better are Adobe’s Lightroom and ProCam. Both allow you to export several RAW files in a single share, but they lack support for Live Photo and RAW + JPEG exports.

Apple vs Corellium

Tuesday, 31 December 2019

After reading the article from 9to5Mac, it seems to me that Apple is suing Corellium for obvious reasons, and Corellium is trying to rally other people and instigate them against Apple, pushing the blame away from itself. Corellium charges money for Apple’s software and hardware; that is, it profits from Apple’s intellectual property and infringes Apple’s explicit copyright statement.

Back in August, Apple filed the lawsuit with the following:

Corellium explicitly markets its product as one that allows the creation of “virtual” Apple devices. For a million dollars a year, Corellium will even deliver a “private” installation of its product to any buyer. There is no basis for Corellium to be selling a product that allows the creation of avowedly perfect replicas of Apple’s devices to anyone willing to pay.

For Corellium’s part, Amanda Gorton, CEO of Corellium, wrote:

Apple’s latest filing against Corellium should give all security researchers, app developers, and jailbreakers reason to be concerned. The filing asserts that because Corellium “allows users to jailbreak” and “gave one or more Persons access… to develop software that can be used to jailbreak,” Corellium is “engaging in trafficking” in violation of the DMCA. In other words, Apple is asserting that anyone who provides a tool that allows other people to jailbreak, and anyone who assists in creating such a tool, is violating the DMCA. Apple underscores this position by calling the unc0ver jailbreak tool “unlawful” and stating that it is “designed to circumvent the same technological measures” as Corellium.

Apple just opened the Security Bounty to everyone. It would be contradictory if they were shutting doors on security researchers, app developers, and jailbreakers just now.

It looks like this fight has been going on for a long time now, and it’s far from being over.

The Decade of Swift

Monday, 30 December 2019

John Sundell wrote a comprehensive article that briefly shows how Swift has evolved from its conception to its current state, along with wishes for the future.

However, perhaps the biggest impact that Swift has had on the Apple developer community isn’t so much a technical one — but rather a cultural one. While there was definitely a strong community surrounding Apple’s developer tools long before Swift, there’s no denying that the introduction of Swift drastically transformed and shifted that community’s focus.

It’s astonishing how the language became such a major focus for Apple, and for the community, from 2015 on, when it became open source. Its evolution accelerated exponentially and everyone benefits from it. It’s no accident that Apple heavily promotes learning this programming language, especially to kids.

When I started developing in Swift in 2014, it was in its earliest versions. It was very raw, but I still managed to publish an app in the store in about a week. Now the language is more mature, of course, and yet it keeps its simplicity and robustness.

About AI Ethics-Washing and the Need for Guidelines

Sunday, 29 December 2019

Karen Hao, for MIT Technology Review:

At the beginning of this year, reflecting on these events, I wrote a resolution for the AI community: Stop treating AI like magic, and take responsibility for creating, applying, and regulating it ethically.

But talk is just that–it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We’re falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises.

AI guidelines are undoubtedly necessary for a world where artificial intelligence is used to help us thrive.


The Paradox of The Times: Site’s Ads and Trackers vs The Privacy Project

Saturday, 28 December 2019

This is a follow-up of my previous article New York Times Special: ‘One Nation, TRACKED’.

There’s a disclaimer at the bottom of each article:

Like other media companies, The Times collects data on its visitors when they read stories like this one. For more detail please see our privacy policy and our publisher’s description of The Times’s practices and continued steps to increase transparency and protections.

As shown in this Pinboard tweet, the very same NYT tracks its readers without ceremony. They try to minimize the guilt by saying, in effect, “we do it like other media companies, so don’t blame us!”

This anodyne footer, which gives no hint of the level of third-party tracking imposed on readers, stays the same even on Privacy Project articles that cover companies with a direct financial relationship to the New York Times. Or Privacy Project op-eds by those companies’ CEOs.

It’s unclear whether those tracking systems use location, but readers can still be roughly located by IP address. Google Analytics, one of the services they use, provides a “soft” tracking option that makes it difficult to pin down a precise location, but many others don’t.

The tracking scripts are everywhere, including on The Privacy Project itself, which is undermined by such behavior.

New York Times Special: 'One Nation, TRACKED'

Saturday, 28 December 2019

This is a remarkable seven-part story led by Stuart A. Thompson and Charlie Warzel for The New York Times about what happens when we don’t consider how, and with which apps, we are sharing our smartphones’ location.

Stuart A. Thompson and Charlie Warzel:

The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

The first article also notes that the precise location data of these millions of phones covers a period of several months in 2016 and 2017. It’s unlikely that things have changed since.

They can see the places you go every moment of the day, whom you meet with or spend the night with, where you pray, whether you visit a methadone clinic, a psychiatrist’s office or a massage parlor.

This is very scary, isn’t it? The fact that you are constantly being monitored just by carrying a piece of tech in your pocket. Even scarier is not knowing whom your location is being sent to.

EVERY MINUTE OF EVERY DAY, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files.

Things get even more frightening when apps share the data with third parties. In that case, our data can go down a rabbit hole where these invisible companies take a share of it and benefit from it, meaning money. This is a must-read piece; you can read it in full here.

One last addition to the topic, John Gruber commenting on Daring Fireball about The Times’s article:

My honest questions: What do we do about it?

Legislation? Make the collection of this sort of data highly-regulated? Is that even feasible with an internet that spans the globe?

Technical? Is there something Apple and Google can do? Should we all be using trusted VPNs all the time to obscure our location? Should Apple build its own VPN and include it with iCloud?

What apps are generating this data? Why don’t we have a list of apps to avoid if you don’t want your location tracked?

Those are genuine questions, and I feel a certain frustration and powerlessness in the face of such news.

In Europe, the GDPR was created to solve this type of problem, among others. Article 17, “Right to erasure (‘right to be forgotten’)”, gives users the right to have their personal data erased. So if a user requires a company to delete their personal data, the company is obligated by law to do so.

Winter Blues, According to Science

Friday, 27 December 2019

Laurie Clarke, on Wired:

For the small subsection of the population who experience full-blown seasonal affective disorder (SAD), it’s even worse – winter blues mutate into something far more debilitating. Sufferers experience hypersomnia, low mood and a pervasive sense of futility during the bleaker months. SAD notwithstanding, depression is more widely reported during winter, suicide rates increase, and productivity in the workplace drops during January and February.

A study looking at three pre-industrial societies – that is those without alarm clocks, smartphones and nine-to-five working hours – in South America and Africa found that these communities collectively snoozed for about an hour longer during winter. Given these communities are located in equatorial regions, this effect could be even more pronounced in the northern hemisphere where winters are colder and darker.

Before moving to Europe, I lived a good part of my life very close to the Equator, so I had only a vague idea of how this could affect people’s minds. Now I see it in practice. Winter shakes the whole thing out of us and requires us to act on some matters: from using house lamps that simulate sunlight to taking vitamin D and melatonin pills.

Instagram Hiding Likes and the Possible Consequences

Thursday, 26 December 2019

Kerry Goyette, for Business Insider:

When we’re getting hearts or likes or shares, it triggers the reward center in the brain. When we feel like we’re not getting approval – even if it’s from something we believe to be trivial like “not enough” likes – it can trigger emotional pain and fear of being left out. Why? Because the brain is always assessing belonging.

Research shows us that likes on social media do provide a boost, but only in the short term. Your dopamine goes up, but pretty soon you’ll need another boost. In other words, likes can feel nice and curb the hunger, yet they won’t feed the soul.

It is crazy to think how a “simple” feature can affect us so much. I wonder if Instagram’s original developers thought of it upfront.

Memory Foam Layer for AirPods Pro

Wednesday, 25 December 2019

Federico Viticci:

I’ve modded the AirPods Pro’s silicone tips with an extra memory foam layer, which helps the tips fit better in my ears, resulting in a warmer sound and overall more comfortable feel. The best part: I didn’t have to cut the memory foam layer myself – I simply took the foam layer from a pair of Symbio W eartips and fitted it inside the AirPods Pro’s tips

Thanks to the extra foam layer, the modded tips “fill” my ears, creating a better seal and ensuring the AirPods stay in place.

As mentioned in this post, I had issues with the AirPods Pro: they wouldn’t stay put in my ears. As soon as I started to talk or walk, they would gradually slip out, and the ear tips didn’t help solve the problem either. But maybe this silicone sealing might be a solution.

I had the opposite experience of many users of the 1st and 2nd generation AirPods: for them, the AirPods Pro fit well, but the previous models didn’t.

It’s like fixing a bug and introducing a new one.

AirPods Pro Bluetooth Latency

Monday, 23 December 2019

Stephen Coyle:

They drop from 274ms to 178ms going from the first to second generation, and the AirPods Pro take it down even further, to 144ms. While a 130ms reduction may not seem like a lot, the perceptual difference from this makes the AirPods Pro tantalisingly close to seamless.

Keyboard clicks are near enough to their corresponding keypresses that they feel like they’re actually related to them, not just the cacophony of blips they had seemed before.

This is really interesting and noteworthy. I wish the tests had also been run with the Powerbeats Pro, which are currently my go-to headphones after I returned the AirPods Pro because they kept falling out of my ears, an issue I didn’t have with the first and second generations.

Pixelmator ML Super Resolution

Monday, 23 December 2019

The Pixelmator Team always tries to extract the most out of iOS devices and macOS computers. Pixelmator Photo, their iPad app, is great and already bundles a lot of machine learning. Their latest ML-powered feature is called ML Super Resolution, and it’s implemented in their macOS app, Pixelmator Pro.

The app already provides three algorithms to scale an image: Bilinear, Lanczos, and Nearest Neighbor, but those use traditional mathematical calculations to predict the surrounding pixels. ML Super Resolution pushes scaling a bit further by analyzing the content of the image instead.

Until now, if an image was too small to be used at its original resolution, either on the web or in print, there was no way to scale it up without introducing visible image defects like pixelation, blurriness, or ringing artifacts. Now, with ML Super Resolution, scaling up an image to three times its original resolution is no problem at all.

It also requires more power from the computer, which is why it has only become reasonably feasible to offer to end users in the last couple of years.

Naturally, the machine learning way requires a lot more processing power than the primitive approaches — between 8 to 62 thousand times more, in fact.

Making this available in an app like Pixelmator Pro has only become possible in the last couple of years — even on Mac computers from 5 or so years ago, ML Super Resolution can take minutes to process a single image due to slower performance and less available memory. On the latest hardware, however, images are processing in a few seconds, and even faster on iMac Pro, Mac Pro, or any Mac with multiple GPUs thanks to our use of Core ML 3 and its multi-GPU support. For the same reasons, the performance of ML Super Resolution is also significantly improved when using an eGPU.

If some details end up being introduced in the optimized version of the image, it doesn’t matter as long as the final result is good enough and you can’t trace it back to the original file.

Apple Security Bounty Is Open to Everyone

Monday, 23 December 2019

The bounty program was previously an invitation-only club, as explained in detail in this 9to5Mac article; now it’s open to anyone who finds potentially catastrophic bugs.

There are certain rules though, as described in the Eligibility section:

In order to be eligible for an Apple Security Bounty, the issue must occur on the latest publicly available versions of iOS, iPadOS, macOS, tvOS, or watchOS with a standard configuration and, where relevant, on the latest publicly available hardware. These eligibility rules are meant to protect customers until an update is available, ensure Apple can quickly verify reports and create necessary updates, and properly reward those doing original research.

  • Be the first party to report the issue to Apple Product Security.
  • Provide a clear report, which includes a working exploit (detailed below).
  • Not disclose the issue publicly before Apple releases the security advisory for the report. (Generally, the advisory is released along with the associated update to resolve the issue).

The payouts vary from $100,000 for unauthorized access to iCloud account data on Apple servers, up to $1,000,000 for a zero-click kernel code execution with persistence and kernel PAC bypass. Issues found in beta releases can raise the payment with a 50% bonus.

A great add-on is that Apple will match bounty payments that are donated to charities.

Amazon, Apple, Google and ZigBee Alliance Joining Forces to Create a New Standard

Sunday, 22 December 2019

IoT devices have been around for quite some time now, yet there’s no ruling protocol to bring them all together1. And regarding security, one of the most concerning topics, it’s not so rare to hear of breaches.

Seattle and Cupertino, Mountain View and Davis, California – Amazon, Apple, Google, and the Zigbee Alliance today announced a new working group that plans to develop and promote the adoption of a new, royalty-free connectivity standard to increase compatibility among smart home products, with security as a fundamental design tenet. Zigbee Alliance board member companies are also onboard to join the working group and contribute to the project.

Thinking about each company’s strengths: Amazon has the largest catalog of IoT products, Apple has the best user experience, Google masters indexing, and the Zigbee Alliance knows its way around specifications and standards.

These four companies are joining forces to try to unify this very fragmented market and maybe deliver the best of all these worlds to the end user. It’s not enough to keep what we currently have and simply make it talk to Alexa, Siri, and Assistant.

The Connected Home over IP project will be open source and may address challenges like security, authentication, updates, and privacy2, as described in this post from IBM.

The goal of the Connected Home over IP project is to simplify development for manufacturers and increase compatibility for consumers. The project is built around a shared belief that smart home devices should be secure, reliable, and seamless to use. By building upon Internet Protocol (IP), the project aims to enable communication across smart home devices, mobile apps, and cloud services and to define a specific set of IP-based networking technologies for device certification.

  1. Think about a lot of companies implementing 4G, but your 4G SIM card working only with your phone because both were made by company X.
  2. Including GDPR compliance for devices in Europe, especially because this standard could also provide a structured way to collect data that can be used for advertising.