Apple Purchases Machine Learning Startup Laserlike

Apple last year acquired Laserlike, a machine learning startup located in Silicon Valley, reports The Information. Apple's purchase of the four-year-old company was confirmed by an Apple spokesperson with a standard acquisition statement: "Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans."

Laserlike's website says that its core mission is to deliver "high quality information and diverse perspectives on any topic from the entire web to you."


The company used machine learning techniques for discovery and personalization to build the Laserlike app, an "interest search engine" that provided news, web, video, and local content relevant to each user. The Laserlike app is no longer available following the acquisition, but the company's website still describes what it was focused on:
We live in a world of information abundance, where the main problem is sifting through the noise and discovering the stuff you actually care about. For instance, if you care about knowing when the next SpaceX livestream launch is because you like to watch it with your kids, or if the car you bought two years ago has had a recall, or if a company you're interested in announces it's opening a new office where you live, or if there's a music festival coming to your town, you don't know when to look for these things, and there's no product that informs you automatically.

This is one of the things we want to fix on the Internet. Laserlike's core mission is to deliver high quality information and diverse perspectives on any topic from the entire web. We are passionate about helping people follow their interests and engage with new perspectives.
The Information suggests that Apple will use the Laserlike acquisition to strengthen its artificial intelligence efforts, including Siri. The Laserlike team has joined the Apple AI group led by new Apple AI chief John Giannandrea, who came to Apple from Google last year.

Giannandrea has been tasked with improving Apple's machine learning initiatives and bolstering Siri, the company's voice assistant. Laserlike's technology could potentially allow Siri to learn more about Apple users to provide more tailored, personalized content.



Apple AI Chief John Giannandrea Gets Promotion to Senior Vice President

Apple today announced that John Giannandrea, who handles machine learning and AI for the company, has been promoted to Apple's executive team and is now listed on the Apple Leadership page as a senior vice president.

Giannandrea joined Apple as its chief of machine learning and AI strategy in April 2018, after Apple hired him away from Google, where he ran the search and artificial intelligence unit.


At the time, Apple said Giannandrea would lead the company's AI and machine learning teams, reporting directly to Apple CEO Tim Cook. Giannandrea took over leadership of Siri and combined Apple's Siri and Core ML teams.

According to Apple's press release announcing the promotion, Giannandrea's team has focused on advancing and tightly integrating machine learning into Apple products, leading to more personal, intelligent, and natural interactions for customers while also protecting user privacy.

Apple CEO Tim Cook says that the company is "fortunate" to have Giannandrea at the helm of its AI and machine learning efforts.
"John hit the ground running at Apple and we are thrilled to have him as part of our executive team," said Tim Cook, Apple's CEO. "Machine learning and AI are important to Apple's future as they are fundamentally changing the way people interact with technology, and already helping our customers live better lives. We're fortunate to have John, a leader in the AI industry, driving our efforts in this critical area."
Prior to joining Apple, Giannandrea spent eight years at Google, and before that he founded two companies, Tellme Networks and Metaweb Technologies.

Giannandrea's April hiring came amid ongoing criticism of Siri, which many consider to fall short of AI offerings from companies like Microsoft, Amazon, and Google. Apple made serious strides improving Siri in 2018, building out the capabilities of the AI assistant with features like Siri Shortcuts in iOS 12.



Apple Details How HomePod Can Detect ‘Hey Siri’ From Across a Room, Even With Loud Music Playing

In a new entry in its Machine Learning Journal, Apple has detailed how Siri on the HomePod is designed to work in challenging usage scenarios, such as during loud music playback, when the user is far away from the HomePod, or when there are other active sound sources in a room, such as a TV or household appliances.


An overview of the task:
The typical audio environment for HomePod has many challenges — echo, reverberation, and noise. Unlike Siri on iPhone, which operates close to the user’s mouth, Siri on HomePod must work well in a far-field setting. Users want to invoke Siri from many locations, like the couch or the kitchen, without regard to where HomePod sits. A complete online system, which addresses all of the environmental issues that HomePod can experience, requires a tight integration of various multichannel signal processing technologies.
To accomplish this, Apple says its audio software engineering and Siri speech teams developed a multichannel signal processing system for the HomePod that uses machine learning algorithms to remove echo and background noise and to separate simultaneous sound sources to eliminate interfering speech.

Apple says the system uses the HomePod's six microphones and runs continuously on the device's Apple A8 chip, even when the HomePod is in its lowest power state to save energy. The multichannel filtering constantly adapts to changing noise conditions and moving talkers, according to the journal entry.

Apple goes on to provide a very technical overview of how the HomePod mitigates echo, reverberation, and noise, which we've put into layman's terms:
  • Echo Cancellation: Since the speakers are close to the microphones on the HomePod, music playback can be significantly louder than a user's "Hey Siri" voice command at the microphone positions, especially when the user is far away from the HomePod. To combat the resulting echo, Siri on HomePod implements a multichannel echo cancellation algorithm (a simplified adaptive-filter sketch follows this list).
  • Reverberation Removal: As the user saying "Hey Siri" moves further away from the HomePod, multiple reflections from the room create reverberation tails that decrease the quality and intelligibility of the voice command. To combat this, Siri on the HomePod continuously monitors the room characteristics and removes the late reverberation while preserving the direct and early reflection components in the microphone signals.
  • Noise Reduction: Far-field speech is typically contaminated by noise from home appliances, HVAC systems, outdoor sounds entering through windows, and so forth. To combat this, the HomePod uses state-of-the-art speech enhancement methods that create a fixed filter for every utterance.
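Apple doesn't publish its implementation, but the echo cancellation step can be illustrated with a toy normalized least-mean-squares (NLMS) adaptive filter, a textbook building block for this kind of system. The code below is a minimal single-channel sketch, not Apple's multichannel algorithm; the signal names, filter length, and step size are all illustrative assumptions.

```python
import numpy as np

def nlms_echo_cancel(mic, playback, num_taps=256, step=0.5, eps=1e-8):
    """Toy single-channel NLMS echo canceller.

    mic:      microphone samples (echo of the playback plus near-end speech)
    playback: the loudspeaker reference signal (same length as mic)
    Returns the echo-suppressed microphone signal.
    """
    w = np.zeros(num_taps)                    # adaptive estimate of the echo path
    out = np.zeros_like(mic, dtype=float)
    for n in range(num_taps, len(mic)):
        x = playback[n - num_taps:n][::-1]    # most recent reference samples
        echo_estimate = w @ x
        e = mic[n] - echo_estimate            # residual: near-end speech + noise
        w += step * e * x / (x @ x + eps)     # NLMS weight update
        out[n] = e
    return out
```

The system described in the journal is multichannel, pairs this kind of adaptive filtering with the reverberation removal and noise reduction steps above, and adapts continuously as playback and room conditions change; the sketch shows only the adaptive-filter core.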

Apple says it tested the HomePod's multichannel signal processing system in several acoustic conditions, including music and podcast playback at different levels, continuous background noise such as conversation and rain, and noise from household appliances such as a vacuum cleaner, hairdryer, and microwave.

During its testing, Apple varied the locations of the HomePod and its test subjects to cover different use cases. For example, in living room or kitchen environments, the HomePod was placed against the wall and in the middle of the room.

Apple's article concludes with a summary of Siri performance metrics on the HomePod, with graphs showing that Apple's multichannel signal processing system led to improved accuracy and fewer errors. Those interested in learning more can read the full entry on Apple's Machine Learning Journal.


Apple to Attend World’s Largest Machine Learning Conference Next Week

Apple today announced that it will be attending the 2018 Conference on Neural Information Processing Systems, aka NeurIPS, in Montréal, Canada from December 2 through December 8. Apple will have a booth staffed with machine learning experts and invites any conference attendees to drop by and chat.


NeurIPS is in its 32nd year and is said to be the world's largest and most influential machine learning and artificial intelligence conference. Apple is likely there to showcase its machine learning technologies and recruit new employees.

Machine learning algorithms play a role in virtually every Apple product and service, ranging from Apple Maps and Apple News to Siri and the QuickType keyboard on iPhone and iPad. Apple has machine learning jobs available in areas such as artificial intelligence, computer vision, data science, and deep learning.

Apple highlights its machine learning efforts in its Machine Learning Journal.



Apple Details Improvements to Siri’s Ability to Recognize Names of Local Businesses and Destinations

In a new entry in its Machine Learning Journal, Apple has detailed how it approached the challenge of improving Siri's ability to recognize names of local points of interest, such as small businesses and restaurants.


In short, Apple says it has built customized language models that incorporate knowledge of the user's geolocation, known as Geo-LMs, improving the accuracy of Siri's automatic speech recognition system. These models enable Siri to better estimate the user's intended sequence of words.

Apple says it built one Geo-LM for each of the 169 Combined Statistical Areas in the United States, as defined by the U.S. Census Bureau, which encompass 80 percent of the country's population. Apple also built a single global Geo-LM to cover all areas not defined by CSAs around the world.

When a user queries Siri, the system is customized with a Geo-LM based on the user's current location. If the user is outside of a CSA, or if Siri doesn't have access to Location Services, the system defaults to the global Geo-LM.
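The journal doesn't include code, but the model-selection logic Apple describes is easy to picture. Below is a minimal sketch, assuming a hypothetical GeoLM class and a dictionary keyed by CSA identifier; the keys and the class are made up for illustration.

```python
from typing import Optional

class GeoLM:
    """Stand-in for a location-specific language model (illustrative only)."""
    def __init__(self, name: str):
        self.name = name

# One model per Combined Statistical Area, plus a single global fallback.
csa_models = {
    "boston": GeoLM("Geo-LM:Boston"),
    "chicago": GeoLM("Geo-LM:Chicago"),
    # ... one entry for each of the 169 CSAs Apple describes
}
global_geo_lm = GeoLM("Geo-LM:Global")

def select_geo_lm(csa_id: Optional[str]) -> GeoLM:
    """Return the CSA-specific model, or the global model when the user is
    outside every CSA or Location Services is unavailable (csa_id is None)."""
    if csa_id is None:
        return global_geo_lm
    return csa_models.get(csa_id, global_geo_lm)

print(select_geo_lm("boston").name)  # Geo-LM:Boston
print(select_geo_lm(None).name)      # Geo-LM:Global
```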

Apple's journal entry is highly technical and quite exhaustive, but the upshot is that Siri should be better able to understand the names of local points of interest and to distinguish, based on a user's geolocation, between a Tom's Restaurant in Iowa and one in Kansas.

In its testing, Apple found that the customized language models reduced Siri's error rate by between 41.9 and 48.4 percent in eight major U.S. metropolitan regions (Boston, Chicago, Los Angeles, Minneapolis, New York, Philadelphia, Seattle, and San Francisco), with mega-chains like Walmart excluded.

Siri still trails Google Assistant in overall accuracy, according to a recent study by research firm Loup Ventures, but hopefully these improvements eliminate some of the frustration of querying Siri about obscurely named places.



Apple Updates Leadership Page to Include New AI Chief John Giannandrea

Apple today updated its Apple Leadership page to include John Giannandrea, who now serves as Apple's Chief of Machine Learning and AI Strategy.

Apple hired Giannandrea back in April, stealing him away from Google where he ran the search and artificial intelligence unit.


Giannandrea is leading Apple's AI and machine learning teams, reporting directly to Apple CEO Tim Cook. He has taken over leadership of Siri, which was previously overseen by software engineering chief Craig Federighi.

Apple told TechCrunch that it is combining its Core ML and Siri teams under Giannandrea. The structure of the two teams will remain intact, but both will now answer to Giannandrea.

Under his leadership, Apple will continue to build its AI/ML teams, says TechCrunch, focusing on general computation in the cloud alongside data-sensitive on-device computations.

Giannandrea spent eight years at Google before joining Apple, and before that, he founded Tellme Networks and Metaweb Technologies.

Apple's hiring of Giannandrea in April came amid ongoing criticism of Siri, which many have claimed has serious shortcomings in comparison to AI offerings from companies like Microsoft, Amazon, and Google due to Apple's focus on privacy.


In 2018, Apple is improving Siri through a new Siri Shortcuts feature coming in iOS 12, which is designed to let users create multi-step tasks that use both first- and third-party apps and can be activated through Siri.



Apple’s Latest Machine Learning Journal Entry Focuses on ‘Hey Siri’ Trigger Phrase

Apple's latest entry in its online Machine Learning Journal focuses on the personalization process users go through when activating the "Hey Siri" feature on iOS devices. Across all Apple products, "Hey Siri" invokes the company's AI assistant and can be followed up with questions like "How is the weather?" or "Message Dad I'm on my way."

"Hey Siri" was introduced in iOS 8 on the iPhone 6, and at that time it could only be used while the iPhone was charging. Afterwards, the trigger phrase could be used at all times thanks to a low-power and always-on processor that fueled the iPhone and iPad's ability to continuously listen for "Hey Siri."


In the new Machine Learning Journal entry, Apple's Siri team breaks down its technical approach to the development of a "speaker recognition system." The team created deep neural networks and "set the stage for improvements" in future iterations of Siri, all motivated by the goal of creating "on-device personalization" for users.

Apple's team says that "Hey Siri" was chosen as the trigger because of its "natural" phrasing, and it described three scenarios where unintended activations prove troubling: "when the primary user says a similar phrase," "when other users say 'Hey Siri,'" and "when other users say a similar phrase." According to the team, the last scenario is "the most annoying false activation of all."

To lessen these accidental activations of Siri, Apple leverages techniques from the field of speaker recognition. Importantly, the Siri team says that it is focused on "who is speaking" and less on "what was spoken."
The overall goal of speaker recognition (SR) is to ascertain the identity of a person using his or her voice. We are interested in “who is speaking,” as opposed to the problem of speech recognition, which aims to ascertain “what was spoken.” SR performed using a phrase known a priori, such as “Hey Siri,” is often referred to as text-dependent SR; otherwise, the problem is known as text-independent SR.
The journal entry then explains how users enroll in personalized "Hey Siri" through explicit and implicit enrollment. Explicit enrollment happens when the user speaks the trigger phrase a few times during setup, while an implicit profile is "created over a period of time" during "real-world situations."
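Apple doesn't detail the underlying representation, but one common way to think about enrollment is as building a reference speaker profile from per-utterance "speaker vectors." The sketch below is a toy illustration under that assumption; the vector dimension, the running-mean profile, and the random stand-in vectors are not from Apple's entry.

```python
import numpy as np

class SpeakerProfile:
    """Toy speaker profile: the running mean of per-utterance speaker vectors."""

    def __init__(self, dim: int = 128):
        self.total = np.zeros(dim)
        self.count = 0

    def add(self, vec: np.ndarray) -> None:
        self.total += vec
        self.count += 1

    @property
    def reference(self) -> np.ndarray:
        return self.total / max(self.count, 1)

rng = np.random.default_rng(0)   # random stand-ins for acoustic-model outputs
profile = SpeakerProfile()

# Explicit enrollment: the user repeats the trigger phrase a few times at setup.
for _ in range(5):
    profile.add(rng.normal(size=128))

# Implicit enrollment: accepted real-world invocations refine the profile over time.
for _ in range(20):
    profile.add(rng.normal(size=128))

print(profile.reference.shape)   # (128,)
```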

The Siri team says the remaining challenges for speaker recognition include achieving quality performance in reverberant (large room) and noisy (car) environments. You can check out the full Machine Learning Journal entry on "Hey Siri" right here.

Since the journal launched last summer, Apple has shared numerous entries about complex topics, including "Hey Siri," face detection, and more. All past entries can be seen on Apple.com.



Deep Neural Networks for Face Detection Explained on Apple’s Machine Learning Journal

Apple today published a new entry in its online Machine Learning Journal, this time covering an on-device deep neural network for face detection, aka the technology that's used to power the facial recognition feature used in Photos and other apps.

Facial detection features were first introduced as part of iOS 10 in the Core Image framework and were used on-device to detect faces in photos, letting people view their images by person in the Photos app.


Implementing this technology was no small feat, says Apple, as it required "orders of magnitude more memory, much more disk storage, and more computational resources."
Apple's iCloud Photo Library is a cloud-based solution for photo and video storage. However, due to Apple's strong commitment to user privacy, we couldn't use iCloud servers for computer vision computations. Every photo and video sent to iCloud Photo Library is encrypted on the device before it is sent to cloud storage, and can only be decrypted by devices that are registered with the iCloud account. Therefore, to bring deep learning based computer vision solutions to our customers, we had to address directly the challenges of getting deep learning algorithms running on iPhone.
Apple's Machine Learning Journal entry describes how Apple overcame these challenges by leveraging the GPU and CPU in iOS devices, developing memory optimizations for network inference, image loading, and caching, and implementing the network in a way that did not interfere with other tasks expected of the iPhone.
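Apple's entry describes its own GPU/CPU and memory optimizations rather than providing code. As a generic illustration of one memory-bounding idea, the sketch below processes a large image in overlapping tiles so peak memory scales with the tile size rather than the full frame; the detect_in_tile placeholder stands in for whatever network actually scores a crop, and none of this reflects Apple's actual implementation.

```python
import numpy as np

def detect_in_tile(crop: np.ndarray):
    """Placeholder for an on-device face detector; returns no boxes here."""
    return []

def tiled_detect(image: np.ndarray, tile: int = 512, overlap: int = 64):
    """Run the placeholder detector over overlapping tiles so peak memory
    scales with the tile size, not the full-resolution image."""
    h, w = image.shape[:2]
    detections = []
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            crop = image[y:y + tile, x:x + tile]
            for (bx, by, bw, bh, score) in detect_in_tile(crop):
                # Shift tile-local boxes back into full-image coordinates.
                detections.append((bx + x, by + y, bw, bh, score))
    return detections

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(len(tiled_detect(frame)))  # 0 with the placeholder detector
```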

The new entry is well worth reading if you're interested in the specific details behind how Apple overcame these challenges to successfully implement the feature. The technical details are dense but understandable, and the entry provides some interesting insight into how facial recognition works.

With its Machine Learning Journal, Apple aims to share the complex concepts behind its technology so the users of its products can get a look behind the curtain. It also serves as a way for Apple's engineers to participate in the AI community.

Apple has previously shared several articles on Siri, including how "Hey Siri" works, and a piece on using machine learning and neural networks to refine synthetic images.



Apple Says ‘Hey Siri’ Detection Briefly Becomes Extra Sensitive If Your First Try Doesn’t Work

A new entry in Apple's Machine Learning Journal provides a closer look at how hardware, software, and internet services work together to power the hands-free "Hey Siri" feature on the latest iPhone and iPad Pro models.


Specifically, a very small speech recognizer built into the embedded motion coprocessor runs all the time and listens for "Hey Siri." When just those two words are detected, Siri parses any subsequent speech as a command or query.

The detector uses a Deep Neural Network to convert the acoustic pattern of a user's voice into a probability distribution. It then uses a temporal integration process to compute a confidence score that the phrase uttered was "Hey Siri."

If the score is high enough, Siri wakes up and proceeds to complete the command or answer the query automatically.

If the score exceeds Apple's lower threshold but not the upper threshold, however, the device enters a more sensitive state for a few seconds, so that Siri is much more likely to be invoked if the user repeats the phrase—even without more effort.

"This second-chance mechanism improves the usability of the system significantly, without increasing the false alarm rate too much because it is only in this extra-sensitive state for a short time," said Apple.

To reduce false triggers from strangers, Apple invites users to complete a short enrollment session in which they say five phrases that each begin with "Hey Siri." The examples are saved on the device.
We compare the distances to the reference patterns created during enrollment with another threshold to decide whether the sound that triggered the detector is likely to be "Hey Siri" spoken by the enrolled user.

This process not only reduces the probability that "Hey Siri" spoken by another person will trigger the iPhone, but also reduces the rate at which other, similar-sounding phrases trigger Siri.
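The quoted passage boils down to a nearest-reference distance test. Here is a toy version, assuming fixed-length speaker vectors, cosine distance, and an arbitrary threshold, none of which come from Apple's entry:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_enrolled_speaker(candidate: np.ndarray,
                        references: list[np.ndarray],
                        threshold: float = 0.35) -> bool:
    """Accept the trigger only if the candidate is close enough to at least
    one reference pattern saved during enrollment."""
    return min(cosine_distance(candidate, r) for r in references) <= threshold

# Toy usage with random stand-ins for vectors produced by the acoustic model.
rng = np.random.default_rng(1)
references = [rng.normal(size=128) for _ in range(5)]    # five enrollment phrases
candidate = references[0] + 0.05 * rng.normal(size=128)  # sounds like the owner
print(is_enrolled_speaker(candidate, references))        # True: distance is small
```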
Apple also says it created "Hey Siri" recordings both up close and at a distance in various environments, such as the kitchen, car, bedroom, and restaurant, from native speakers of many languages around the world.

For many more technical details about how "Hey Siri" works, be sure to read Apple's full article on its Machine Learning Journal.

