Behind the Motion Photos Technology in Pixel 2

Posted by Matthias Grundmann, Research Scientist and Jianing Wei, Software Engineer, Google Research

One of the most compelling things about smartphones today is the ability to capture a moment on the fly. With motion photos, a new camera feature available on the Pixel 2 and Pixel 2 XL phones, you no longer have to choose between a photo and a video: every photo you take captures more of the moment. When you take a photo with motion enabled, your phone also records and trims up to 3 seconds of video. Using advanced stabilization built upon technology we pioneered in Motion Stills for Android, these pictures come to life in Google Photos. Let’s take a look behind the technology that makes this possible!

Motion photos on the Pixel 2 in Google Photos. With the camera frozen in place the focus is put directly on the subjects. For more examples, check out this Google Photos album.

Camera Motion Estimation by Combining Hardware and Software
The image and video pair that is captured every time you hit the shutter button is a full resolution JPEG with an embedded 3 second video clip. On the Pixel 2, the video portion also contains motion metadata that is derived from the gyroscope and optical image stabilization (OIS) sensors to aid the trimming and stabilization of the motion photo. By combining software based visual tracking with the motion metadata from the hardware sensors, we built a new hybrid motion estimation for motion photos on the Pixel 2.

Our approach aligns the background more precisely than the technique used in Motion Stills or the purely hardware sensor based approach. Based on Fused Video Stabilization technology, it reduces the artifacts that visual analysis alone produces in complex scenes with many depth layers or when a foreground object occupies a large portion of the field of view. It also improves on the purely hardware sensor based approach by refining the motion estimation to be more accurate, especially at close distances.

Motion photo as captured (left) and after freezing the camera by combining hardware and software. For more comparisons, check out this Google Photos album.

The purely software-based technique we introduced in Motion Stills uses the visual data from the video frames, detecting and tracking features over consecutive frames to yield motion vectors. It then classifies the motion vectors into foreground and background using motion models such as an affine transformation or a homography. However, this classification is not perfect and can be misled, e.g., by a complex scene or a dominant foreground object.
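To make this step concrete, here is a minimal sketch of the idea, assuming tracked feature positions are already available. It fits a single affine background model by least squares and labels outliers as foreground; the production pipeline uses more robust estimation and richer models (homographies, RANSAC-style outlier rejection), so this is illustrative only:

```python
import numpy as np

def classify_motion_vectors(prev_pts, curr_pts, threshold=3.0):
    """Label tracked features as background (True) or foreground (False).

    Fits one global 2-D affine motion model to all feature
    correspondences and marks features whose motion deviates from the
    model by more than `threshold` pixels as foreground.
    """
    n = len(prev_pts)
    # Affine model: [x', y'] = [x, y, 1] @ P, with P a (3, 2) matrix.
    X = np.hstack([prev_pts, np.ones((n, 1))])
    P, *_ = np.linalg.lstsq(X, curr_pts, rcond=None)
    residuals = np.linalg.norm(curr_pts - X @ P, axis=1)
    return residuals < threshold
```

A cleaner implementation would iterate — fit on the inliers, re-score, and repeat — which is essentially what robust estimators do.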

Feature classification into background (green) and foreground (orange) by using the motion metadata from the hardware sensors of the Pixel 2. Notice how the new approach not only labels the skateboarder accurately as foreground but also the half-pipe that is at roughly the same depth.

For motion photos on Pixel 2 we improved this classification by using the motion metadata derived from the gyroscope and the OIS. This accurately captures the camera motion with respect to the scene at infinity, which one can think of as the background in the distance. However, for pictures taken at closer range, parallax is introduced for scene elements at different depth layers, which is not accounted for by the gyroscope and OIS. To separate those layers we still rely on the visual analysis: we mark motion vectors that deviate too much from the motion metadata as foreground. This results in a significantly more accurate classification of foreground and background, which also enables us to use a more complex motion model known as mixture homographies that can account for rolling shutter and undo the distortions it causes.
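The geometry behind this can be sketched directly. For a pure camera rotation, the induced image motion of the scene at infinity is the homography H = K R K⁻¹, where K is the camera intrinsic matrix and R the frame-to-frame rotation from the gyroscope. The snippet below is a simplified illustration with hypothetical intrinsics; the real pipeline also folds in OIS lens-shift data:

```python
import numpy as np

def sensor_predicted_flow(pts, K, R):
    """Image-space displacement of points at infinity induced purely by
    a camera rotation R, via the homography H = K @ R @ inv(K)."""
    H = K @ R @ np.linalg.inv(K)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3] - pts

def label_background(pts, observed_flow, K, R, threshold=3.0):
    """True where the observed flow agrees with the rotation-only
    prediction, i.e. the feature behaves like distant background."""
    deviation = np.linalg.norm(
        observed_flow - sensor_predicted_flow(pts, K, R), axis=1)
    return deviation < threshold
```

Features on nearby objects exhibit extra parallax on top of this prediction, which is exactly the deviation being thresholded.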

Background motion estimation in motion photos. By using the motion metadata from Gyro and OIS we are able to accurately classify features from the visual analysis into foreground and background.

Motion Photo Stabilization and Playback
Once we have accurately estimated the background motion for the video, we determine an optimally stable camera path to align the background using linear programming techniques outlined in our earlier posts. Further, we automatically trim the video to remove any accidental motion caused by putting the phone away. All of this processing happens on your phone and produces a small amount of metadata per frame that is used to render the stabilized video in real-time using a GPU shader when you tap the Motion button in Google Photos. In addition, we play the video starting at the exact timestamp as the HDR+ photo, producing a seamless transition from still image to video.
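The actual stable path is computed with linear programming, as described in the earlier posts the authors reference. As a rough stand-in that conveys the idea, the sketch below smooths a jittery camera path with a simple windowed average and returns the per-frame correction offsets a renderer would apply, analogous to the small amount of per-frame metadata used at playback time:

```python
import numpy as np

def stabilizing_corrections(path, radius=5):
    """Compute per-frame correction offsets that move the original
    camera path onto a smoothed version of itself.

    path: (T, 2) array of accumulated camera translations per frame.
    Returns a (T, 2) array of corrections to apply when rendering.
    """
    T = len(path)
    smoothed = np.empty_like(path, dtype=float)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        smoothed[t] = path[lo:hi].mean(axis=0)  # windowed average
    return smoothed - path
```

A windowed average merely damps jitter; the linear-programming formulation instead produces piecewise constant, linear, and parabolic segments, which look like deliberate camera moves.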

Motion photos stabilize even complex scenes with large foreground motions.

Motion Photo Sharing
Using Google Photos, you can share motion photos with your friends as videos and GIFs, watch them on the web, or view them on any phone. This is another example of combining hardware, software and machine learning to create new features for Pixel 2.

Motion photos is the result of a collaboration across several Google Research teams, Google Pixel and Google Photos. We especially want to acknowledge the work of Karthik Raveendran, Suril Shah, Marius Renn, Alex Hong, Radford Juang, Fares Alhassen, Emily Chang, Isaac Reynolds, and Dave Loxton.


IN FLAMES X NEWSOUL The Water Margin Series 1/6th scale “Skywalker Wu Song” 12-inch Collectible Figure Deluxe Version

Wu Song, nicknamed “Pilgrim”, is a fictional character in Water Margin, one of the Four Great Classical Novels of Chinese literature. According to legend, Wu Song was a student of the archer Zhou Tong and he specialised in Chuojiao, Ditangquan, and the use of the staff. The novel describes him as a good-looking man with shining eyes, thick eyebrows, a muscular body and an impressive bearing. His parents died early, and he was raised by his elder brother, Wu Dalang (武大郎; literally “Eldest Brother Wu”).

IN FLAMES X NEWSOUL (Product code: IFT-030) The Water Margin Series 1/6th scale “Skywalker Wu Song” Collectible Figure Deluxe Version features: An elaborately carved head (with Buddhist monk’s headband and long black real fabric hair implantation), approximately 33 cm tall, newly developed muscular body with seamless upper body, 9 pieces of interchangeable palms


Costumes: Buddhist monk outfit – White underwear, Brown robe, Dark blue shoulder coat, Skull beads, Dark blue wristbands, Red belt, Narrow blue waist support, Black bloomers, Black officer boots, A suit of battle-damaged lower hem (to be used when the figure is stripped to the waist), White lower hem (covered with mud and blood), Brown lower hem (covered with mud and blood), Black belt, Wide blue waist support

Weapon: Double buddhist monk’s knife

Accessories: Square stage with transparent pillar, Bamboo hat, A wine bottle gourd
Exclusive to Deluxe Version: Wine bowl (including spill-out effect), Right hand for holding the wine bowl, Wine jars (including pouring effect), Glass chair, Black flag, Flag base

Release date: Approximately Q4 2018 — Q1 2019


Semantic Image Segmentation with DeepLab in TensorFlow

Posted by Liang-Chieh Chen and Yukun Zhu, Software Engineers, Google Research

Semantic image segmentation, the task of assigning a semantic label, such as “road”, “sky”, “person”, or “dog”, to every pixel in an image, enables numerous new applications, such as the synthetic shallow depth-of-field effect shipped in the portrait mode of the Pixel 2 and Pixel 2 XL smartphones and mobile real-time video segmentation. Assigning these semantic labels requires pinpointing the outline of objects, and thus imposes much stricter localization accuracy requirements than other visual entity recognition tasks, such as image-level classification or bounding box-level detection.
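That stricter localization requirement shows up directly in how segmentation is scored: benchmarks such as PASCAL VOC and Cityscapes evaluate per-pixel predictions with mean intersection-over-union (mIoU). A minimal sketch of the metric:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over the classes present.

    pred, target: integer label maps of the same shape.
    For each class, IoU = |pred ∩ target| / |pred ∪ target|.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

Because every misplaced boundary pixel counts against both the intersection and the union, the metric rewards exactly the sharp object outlines the post describes.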

Today, we are excited to announce the open source release of our latest and best performing semantic image segmentation model, DeepLab-v3+ [1]*, implemented in TensorFlow. This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture [2, 3] for the most accurate results, intended for server-side deployment. As part of this release, we are additionally sharing our TensorFlow model training and evaluation code, as well as models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks.

Since the first incarnation of our DeepLab model [4] three years ago, improved CNN feature extractors, better object scale modeling, careful assimilation of contextual information, improved training procedures, and increasingly powerful hardware and software have led to improvements with DeepLab-v2 [5] and DeepLab-v3 [6]. With DeepLab-v3+, we extend DeepLab-v3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further apply the depthwise separable convolution to both atrous spatial pyramid pooling [5, 6] and decoder modules, resulting in a faster and stronger encoder-decoder network for semantic segmentation.
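The efficiency gain from depthwise separable convolution comes from factoring a standard k×k convolution into a per-channel spatial filter followed by a 1×1 channel-mixing convolution, which cuts the parameter count from k²·C_in·C_out to k²·C_in + C_in·C_out. A toy NumPy sketch of the operation (illustrative only; the released models use TensorFlow’s optimized ops):

```python
import numpy as np

def depthwise_separable_conv(x, depthwise, pointwise):
    """Depthwise separable convolution, 'valid' padding, stride 1.

    x:         (H, W, C_in) input feature map.
    depthwise: (k, k, C_in) one spatial filter per input channel.
    pointwise: (C_in, C_out) 1x1 convolution mixing channels.
    """
    H, W, Cin = x.shape
    k = depthwise.shape[0]
    out_h, out_w = H - k + 1, W - k + 1
    dw = np.zeros((out_h, out_w, Cin))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i + k, j:j + k, :]            # (k, k, C_in)
            dw[i, j] = (patch * depthwise).sum(axis=(0, 1))
    return dw @ pointwise                             # (out_h, out_w, C_out)
```

With k = 3 and 256 channels in and out, the factored form needs roughly 68K parameters versus about 590K for a full 3×3 convolution, which is why it speeds up both the atrous spatial pyramid pooling and the decoder.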

Modern semantic image segmentation systems built on top of convolutional neural networks (CNNs) have reached accuracy levels that were hard to imagine even five years ago, thanks to advances in methods, hardware, and datasets. We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology.

We would like to thank Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille (co-authors of DeepLab-v1 and -v2) for their support and valuable discussions, as well as Mark Sandler, Andrew Howard, Menglong Zhu, Chen Sun, Derek Chow, Andre Araujo, Haozhi Qi, Jifeng Dai, and the Google Mobile Vision team.


  1. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation, Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam, arXiv: 1802.02611, 2018.
  2. Xception: Deep Learning with Depthwise Separable Convolutions, François Chollet, Proc. of CVPR, 2017.
  3. Deformable Convolutional Networks — COCO Detection and Segmentation Challenge 2017 Entry, Haozhi Qi, Zheng Zhang, Bin Xiao, Han Hu, Bowen Cheng, Yichen Wei, and Jifeng Dai, ICCV COCO Challenge Workshop, 2017.
  4. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, Proc. of ICLR, 2015.
  5. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, TPAMI, 2017.
  6. Rethinking Atrous Convolution for Semantic Image Segmentation, Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam, arXiv:1706.05587, 2017.

* DeepLab-v3+ is not used to power Pixel 2’s portrait mode or real-time video segmentation. These are mentioned in the post as examples of features this type of technology can enable.


BEC 2018 Recap

Well, another great BEC Conference is now in the books and once again it was a major winner.  Like the many I have attended previously, I left very fired up about what I learned and whom I got to meet and spend time with.  This year the attendee numbers were strong: more than 500 were there, and they were active, either in the conference area or in the mini trade show spot.  This was the first year since those crazy pre-recession days that the numbers were that impressive. 

–  As for recapping, there were a few things that stuck out.  The opening panel, which featured leaders from the design, manufacturing, installation, and contracting world, was off the charts.  It could have easily taken up the entire morning session, especially with quite a few questions still waiting in the queue.  So lesson learned on the planning end: something that good will need more time.  Tom Jackson, President of Steel Encounters, did an absolutely fantastic job on the world of employee relations, culture, and finding and retaining the workforce. He had one stat that I discussed with a lot of attendees and so I need to share here…

“95% of job candidates believe culture is more important than compensation”

–  That still blows me away but also shows I am old fashioned…. And speaking of old, the keynote (thanks to Guardian Glass and an inspired choice by Chris Dolan) was Jeff Havens and he provided an incredibly energetic and entertaining approach to generational differences in the workplace… old vs. young… and my gosh I am now officially old. 

–  Overall the presentations were excellent, with a ton of different subjects to satisfy so much of what the modern glazing contractor or installer could need.  The technical meeting, chaired by the impressive Matt Kamper of Woodbridge Glass, was interesting.  I always enjoy the ins and outs of it, but the fact that NFRC was covered in detail cracked me up.  I have been in the NFRC mix since 2004!!  And we’re still talking about the same basic things.  Just incredible really. 

–  As always the networking makes the event. The Sunday night reception was awesome- the room was jam-packed and when the reception ended it was still busy with the hotel management trying several different moves to get people to clear out.  That is always the sign of a good party.  And yes I stayed til the end. (That never, ever happens if you know me.)

–  Before I run into whom I visited with, I have to give props to Gus Trupiano of AGC for leading this event as the chair of the BEC division.  Gus is not only an excellent and classy man, but he’s also a great leader who did the industry proud once again.  Kudos as well to Sara Neiswanger of GANA/NGA for her tireless work on this- she does so much behind the scenes, and does it with great care & skill.

–  As for the networking… it was fun to fly on a plane loaded with industry folks, poor Joe Erb of Quanex got stuck in the middle seat next to me for 4-1/2 hours.  He deserves a medal.  Plus the team from Guardian Glass was on board and I do sincerely enjoy chatting with them any chance I get.  Once in Vegas it was great to see Bill Sullivan of Brin Glass, he’s a tremendous supporter of the industry and it is appreciated.  In that same boat are people like Chuck Knickerbocker of TGP and Jon Kimberlain of Dow- I love what they do and getting a few minutes with each of them is a great honor for me. 

–  The talent on display at this event is really crazy- people like Gary McQueen of JE Berkowitz, Rob Carlson of Tristar, and Ian Patlin of Paragon are so impressive to me.  And my friend Shelly Farmer of Trex Commercial never disappoints, she’s always top of her game and doing great things.  It’s well known I am a fan of the Viracon guys, Garret Henson, Seth Madole, and Cameron Scripture- brilliant and good people too. 

–  I like meeting new people and learning new things too… It was great to meet Charles Alexander, the newest addition at Walker Glass (though saying goodbye to Marc Deschamps was VERY hard for me) and meeting Joffy Thompson and John Vissari of United Plate Glass was incredible.  Good, sharp guys for sure.  As for new things, I learned about the new, exciting unitized product from Kurt Levan and Joel Phelps of Entekk- that was very cool.  Best of luck to them.

–  Got to chat with Chris Knitter of Oahu Metal & Glazing for the first time in a few years, and same with Maure Creager & Tim Finley of SAGE Glass.  (Side note: SAGE has the coolest business cards; props to Derek Malmquist on that.)  I only see Tracy Robbins of Walters & Wolf at this event, and I am glad I always do.  Good guy!  Running into a former co-worker of mine, Wardi Bisharat of PRL, was fantastic; she rocks as always.

–  Any time I get with the great Rich Porayko is a blessing for me.  I got to tell the “how I met my wife” story to Bob Burkhammer of Giroux and his wonderful wife, and I spent some quality time with Bernard Lax of Pulp which I value a ton. 

–  The event was so huge I did not see a lot of people I wanted to see.  I barely saw Tim McGee of Glass Coatings and Concepts and I missed Tom O’Malley of Clover Architectural completely.  I so badly wanted to hear how great things are going for him, as I see Clover everywhere these days!  I also missed visiting with the Vitro folks and missed a few opportunities to catch up with old friend Tim Moore of Standard Bent.

–  So it’s now on to the next events… for me it’s most likely GlassBuild, as I do not think I am attending AIA…. And I am very excited about GlassBuild based on the vibes just experienced at BEC.  We have a lot of positivity flowing in our industry right now, so let’s keep at it!


You don’t always get rewarded for doing something nice or right, (as that’s not why you do nice things) so it’s very neat to see when it does happen!
If any of you think you can be stealth and private?  Not a chance… everyone is watching.
Jury Duty needs… and if anyone wants to hear my classic jury duty story, just ask… it’s a favorite of mine…

I mentioned how energetic and entertaining the keynote was at BEC and found a quick video of him online… this gives you a flavor.  Good stuff!


Introducing the iNaturalist 2018 Challenge

Posted by Yang Song, Staff Software Engineer and Serge Belongie, Visiting Faculty, Google Research

Thanks to recent advances in deep learning, the visual recognition abilities of machines have improved dramatically, permitting the practical application of computer vision to tasks ranging from pedestrian detection for self-driving cars to expression recognition in virtual reality. One area that remains challenging for computers, however, is fine-grained and instance-level recognition. Earlier this month, we posted an instance-level landmark recognition challenge for identifying individual landmarks. Here we focus on fine-grained visual recognition: distinguishing, for example, species of animals and plants, car and motorcycle models, and architectural styles. For computers, discriminating fine-grained categories is challenging because many categories have relatively few training examples (i.e., the long tail problem), the examples that do exist often lack authoritative training labels, and there is variability in illumination, viewing angle, and object occlusion.

To help confront these hurdles, we are excited to announce the 2018 iNaturalist Challenge (iNat-2018), a species classification competition offered in partnership with iNaturalist and Visipedia (short for Visual Encyclopedia), a project for which Caltech and Cornell Tech received a Google Focused Research Award. This is a flagship challenge for the 5th International Workshop on Fine Grained Visual Categorization (FGVC5) at CVPR 2018. Building upon the first iNaturalist challenge, iNat-2017, iNat-2018 spans over 8000 categories of plants, animals, and fungi, with a total of more than 450,000 training images. We invite participants to enter the competition on Kaggle, with final submissions due in early June. Training data, annotations, and links to pretrained models can be found on our GitHub repo.

Since its founding in 2008, iNaturalist has emerged as a world leader for citizen scientists to share observations of species and connect with nature. It hosts research-grade photos and annotations submitted by a thriving, engaged community of users. Consider the following photo from iNaturalist:

The map on the right shows where the photo was taken. Image credit: Serge Belongie.

You may notice that the photo on the left contains a turtle. But did you also know this is a Trachemys scripta, common name “Pond Slider”? If you knew the latter, you possess knowledge of fine-grained or subordinate categories.

In contrast to other image classification datasets such as ImageNet, the dataset in the iNaturalist challenge exhibits a long-tailed distribution, with many species having relatively few images. It is important to enable machine learning models to handle categories in the long tail, as the natural world is heavily imbalanced – some species are more abundant and easier to photograph than others. The iNaturalist challenge will encourage progress, because the training distribution of iNat-2018 has an even longer tail than iNat-2017.
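One simple, commonly used countermeasure for such imbalance (an illustrative technique, not something prescribed by the challenge) is to resample or reweight training examples by inverse class frequency, so each species contributes comparable total weight:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-example sampling weight proportional to 1 / class count,
    so rare classes in the long tail contribute as much total weight
    as common ones when drawing training batches."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]
```

Feeding these weights to a weighted sampler makes a rare species with three photos roughly as likely to appear in a batch as an abundant species with thousands.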

Distribution of training images per species for iNat-2017 and iNat-2018, plotted on a log-linear scale, illustrating the long-tail behavior typical of fine-grained classification problems. Image Credit: Grant Van Horn and Oisin Mac Aodha.

Along with iNat-2018, FGVC5 will also host the iMaterialist 2018 challenge (including a furniture categorization challenge and a fashion attributes challenge for product images) and a set of “FGVCx” challenges representing smaller scale – but still significant – challenges, featuring content such as food and modern art.

FGVC5 will be showcased on the main stage at CVPR 2018, thereby ensuring broad exposure for the top performing teams. This project will advance the state-of-the-art in automatic image classification for real-world, fine-grained categories with heavy class imbalances and large numbers of classes. We cordially invite you to participate in these competitions and help move the field forward!

We’d like to thank our colleagues and friends at iNaturalist, Visipedia, and FGVC5 for working together to advance this important area. At Google we would like to thank Hartwig Adam, Weijun Wang, Nathan Frey, Andrew Howard, Alessandro Fin, Yuning Chai, Xiao Zhang, Jack Sim, Yuan Li, Grant Van Horn, Yin Cui, Chen Sun, Yanan Qian, Grace Vesom, Tanya Birch, Celeste Chung, Wendy Kan, and Maggie Demkin.


Hot Toys MMS476 Avengers: Infinity War 1/6th scale Groot & Rocket collectible figures set


Get ready for the return of Rocket and Groot in the epic Avengers: Infinity War, which unveils a first look at the Guardians teaming up with the Avengers! Under the care of the Guardians, Groot continues to prove himself as a dependable hero while battling alongside his comrade Rocket! Sideshow and Hot Toys are delighted to present the sixth scale collectible set of Groot and Rocket from Marvel’s film Avengers: Infinity War.

The newly developed Groot is expertly crafted based on his appearance in the film, featuring a finely sculpted head with 2 interchangeable face sculpts, impressive paint application on his body reflecting his distinctive appearance, blaster rifle, handheld game console and a movie-themed figure stand with movie logo.

The movie-accurate Rocket is specially crafted based on his unique physique in the film. It features a newly painted head portraying his roaring expression with a remarkable likeness, a specially tailored combat suit, interchangeable hands and feet, an all-new highly detailed blaster rifle, and a specially designed movie-themed figure stand with movie logo.



Hot Toys Avengers: Infinity War Groot Sixth Scale Collectible Figure specially features: Authentic and detailed likeness of Groot in Marvel Studio’s Avengers: Infinity War | Two (2) newly developed interchangeable face sculpts with movie-accurate facial expression and tree texture | Approximately 29.5 cm tall | Newly developed unique body with over 15 points of articulation | Five (5) pieces of interchangeable hands including: pair of relaxed hands, partially clenched left hand, weapon holding right hand, attacking right hand

Weapon: blaster rifle

Accessories: handheld game console

Hot Toys Avengers: Infinity War Rocket Sixth Scale Collectible Figure specially features: Newly painted roaring expression head sculpt with an authentic and detailed likeness of Rocket from Avengers: Infinity War | Movie-accurate facial expression and detailed fur texture | Approximately 16 cm tall | Specialized body with over 17 points of articulation | Three (3) pairs of interchangeable hands including: pair of relaxed hands, pair of fists, pair of hands for holding a blaster rifle | Two (2) pairs of interchangeable feet including: pair of feet for standing, pair of feet in a flying stance

Costume: navy blue space suit, utility belt with pouches, gun strap (wearable on the back)

Weapon: blaster rifle

Accessory: Specially designed movie-themed figure stands with movie logo


Release date: Approximately Q4, 2018 – Q1, 2019


Open Sourcing the Hunt for Exoplanets

Posted by Chris Shallue, Senior Software Engineer, Google Brain Team

(Crossposted on the Google Open Source Blog)

Recently, we discovered two exoplanets by training a neural network to analyze data from NASA’s Kepler space telescope and accurately identify the most promising planet signals. And while this was only an initial analysis of ~700 stars, we consider this a successful proof-of-concept for using machine learning to discover exoplanets, and more generally another example of using machine learning to make meaningful gains in a variety of scientific disciplines (e.g. healthcare, quantum chemistry, and fusion research).

Today, we’re excited to release our code for processing the Kepler data, training our neural network model, and making predictions about new candidate signals. We hope this release will prove a useful starting point for developing similar models for other NASA missions, like K2 (Kepler’s second mission) and the upcoming Transiting Exoplanet Survey Satellite mission. As well as announcing the release of our code, we’d also like to take this opportunity to dig a bit deeper into how our model works.

A Planet Hunting Primer
First, let’s consider how data collected by the Kepler telescope is used to detect the presence of a planet. The plot below is called a light curve, and it shows the brightness of the star (as measured by Kepler’s photometer) over time. When a planet passes in front of the star, it temporarily blocks some of the light, which causes the measured brightness to decrease and then increase again shortly thereafter, causing a “U-shaped” dip in the light curve.

A light curve from the Kepler space telescope with a “U-shaped” dip that indicates a transiting exoplanet.
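A synthetic example makes the geometry concrete. The sketch below, with made-up numbers, builds a flat light curve of unit brightness and carves a periodic box-shaped dip into it wherever a hypothetical planet crosses the star (a real transit is U-shaped rather than a sharp box, because of the planet’s gradual ingress and egress and the star’s limb darkening):

```python
import numpy as np

def transit_light_curve(time, period, t0, duration, depth):
    """Flat light curve of unit brightness with a periodic box-shaped
    dip of fractional `depth` centred on each transit midpoint."""
    # Phase-fold so each transit midpoint sits at phase 0.
    phase = ((time - t0 + period / 2) % period) - period / 2
    flux = np.ones_like(time, dtype=float)
    flux[np.abs(phase) < duration / 2] -= depth
    return flux
```

For an Earth-sized planet around a Sun-like star the depth is under 0.01%, which is why small planets are so easy to lose in the noise.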

However, other astronomical and instrumental phenomena can also cause the measured brightness of a star to decrease, including binary star systems, starspots, cosmic ray hits on Kepler’s photometer, and instrumental noise.

Consider three further example light curves. The first has a “V-shaped” pattern that tells us that a very large object (i.e. another star) passed in front of the star that Kepler was observing. The second light curve contains two places where the brightness decreases, which indicates a binary system with one bright and one dim star: the larger dip is caused by the dimmer star passing in front of the brighter star, and vice versa. The third light curve is one example of the many other non-planet signals where the measured brightness of a star appears to decrease.

To search for planets in Kepler data, scientists use automated software (e.g. the Kepler data processing pipeline) to detect signals that might be caused by planets, and then manually follow up to decide whether each signal is a planet or a false positive. To avoid being overwhelmed with more signals than they can manage, the scientists apply a cutoff to the automated detections: those with signal-to-noise ratios above a fixed threshold are deemed worthy of follow-up analysis, while all detections below the threshold are discarded. Even with this cutoff, the number of detections is still formidable: to date, over 30,000 detected Kepler signals have been manually examined, and about 2,500 of those have been validated as actual planets!

Perhaps you’re wondering: does the signal-to-noise cutoff cause some real planet signals to be missed? The answer is, yes! However, if astronomers need to manually follow up on every detection, it’s not really worthwhile to lower the threshold, because as the threshold decreases the rate of false positive detections increases rapidly and actual planet detections become increasingly rare. However, there’s a tantalizing incentive: it’s possible that some potentially habitable planets like Earth, which are relatively small and orbit around relatively dim stars, might be hiding just below the traditional detection threshold — there might be hidden gems still undiscovered in the Kepler data!

A Machine Learning Approach
The Google Brain team applies machine learning to a diverse variety of data, from human genomes to sketches to formal mathematical logic. Considering the massive amount of data collected by the Kepler telescope, we wondered what we might find if we used machine learning to analyze some of the previously unexplored Kepler data. To find out, we teamed up with Andrew Vanderburg at UT Austin and developed a neural network to help search the low signal-to-noise detections for planets.

We trained a convolutional neural network (CNN) to predict the probability that a given Kepler signal is caused by a planet. We chose a CNN because they have been very successful in other problems with spatial and/or temporal structure, like audio generation and image classification.

Luckily, we had 30,000 Kepler signals that had already been manually examined and classified by humans. We used a subset of around 15,000 of these signals, of which around 3,500 were verified planets or strong planet candidates, to train our neural network to distinguish planets from false positives. The inputs to our network are two separate views of the same light curve: a wide view that allows the model to examine signals elsewhere on the light curve (e.g., a secondary signal caused by a binary star), and a zoomed-in view that enables the model to closely examine the shape of the detected signal (e.g., to distinguish “U-shaped” signals from “V-shaped” signals).
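A rough sketch of how such views can be produced: phase-fold the light curve on the detected period so the transit sits at phase zero, then bin at two scales. The bin counts below are arbitrary placeholders, not the values used in the paper:

```python
import numpy as np

def fold_and_bin(time, flux, period, t0, half_width, num_bins):
    """Phase-fold on the detected period (transit centred at phase 0)
    and median-bin into a fixed-length view for the network."""
    phase = ((time - t0 + period / 2) % period) - period / 2
    edges = np.linspace(-half_width, half_width, num_bins + 1)
    view = np.zeros(num_bins)
    for b in range(num_bins):
        sel = flux[(phase >= edges[b]) & (phase < edges[b + 1])]
        if len(sel):
            view[b] = np.median(sel)
    return view

def two_views(time, flux, period, t0, duration):
    # Global view: the whole orbital phase. Local view: a zoom on the
    # transit, a few transit-durations wide.
    global_view = fold_and_bin(time, flux, period, t0, period / 2, 101)
    local_view = fold_and_bin(time, flux, period, t0, 2 * duration, 51)
    return global_view, local_view
```

Both views have a fixed length regardless of the orbital period, which is what lets a single network handle signals with very different periods.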

Once we had trained our model, we investigated the features it learned about light curves to see if they matched with our expectations. One technique we used (originally suggested in this paper) was to systematically occlude small regions of the input light curves to see whether the model’s output changed. Regions that are particularly important to the model’s decision will change the output prediction if they are occluded, but occluding unimportant regions will not have a significant effect. Below is a light curve from a binary star that our model correctly predicts is not a planet. The points highlighted in green are the points that most change the model’s output prediction when occluded, and they correspond exactly to the secondary “dip” indicative of a binary system. When those points are occluded, the model’s output prediction changes from ~0% probability of being a planet to ~40% probability of being a planet. So, those points are part of the reason the model rejects this light curve, but the model uses other evidence as well – for example, zooming in on the centred primary dip shows that it’s actually “V-shaped”, which is also indicative of a binary system.
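The occlusion probe itself is model-agnostic and easy to sketch. The simplified 1-D version below (not the authors’ code) slides a zeroing window across the input and records how much the model’s output moves when each region is hidden:

```python
import numpy as np

def occlusion_importance(predict, x, window=5):
    """Slide a zeroing window over a 1-D input and record, per point,
    the largest change in the model's output caused by hiding it."""
    base = predict(x)
    importance = np.zeros_like(x, dtype=float)
    for start in range(0, len(x) - window + 1):
        occluded = x.copy()
        occluded[start:start + window] = 0.0
        delta = abs(predict(occluded) - base)
        importance[start:start + window] = np.maximum(
            importance[start:start + window], delta)
    return importance
```

Applied to a light curve, the points with the highest importance are exactly the highlighted regions described above, such as the secondary dip of a binary star.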

Searching for New Planets
Once we were confident with our model’s predictions, we tested its effectiveness by searching for new planets in a small set of 670 stars. We chose these stars because they were already known to have multiple orbiting planets, and we believed that some of these stars might host additional planets that had not yet been detected. Importantly, we allowed our search to include signals that were below the signal-to-noise threshold that astronomers had previously considered. As expected, our neural network rejected most of these signals as spurious detections, but a handful of promising candidates rose to the top, including our two newly discovered planets: Kepler-90 i and Kepler-80 g.

Find your own Planet(s)!
Let’s take a look at how the code released today can help (re-)discover the planet Kepler-90 i. The first step is to train a model by following the instructions on the code’s home page. It takes a while to download and process the data from the Kepler telescope, but once that’s done, it’s relatively fast to train a model and make predictions about new signals. One way to find new signals to show the model is to use an algorithm called Box Least Squares (BLS), which searches for periodic “box shaped” dips in brightness (see below). The BLS algorithm will detect “U-shaped” planet signals, “V-shaped” binary star signals and many other types of false positive signals to show the model. There are various freely available software implementations of the BLS algorithm, including VARTOOLS and LcTools. Alternatively, you can even look for candidate planet transits by eye, like the Planet Hunters.
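As a toy stand-in for a full BLS implementation (real packages such as VARTOOLS handle noise weighting, trial-period grids, and detection statistics far more carefully), the sketch below brute-forces trial periods and transit epochs and keeps the box that maximizes the dip depth:

```python
import numpy as np

def simple_box_search(time, flux, periods, duration, n_epochs=70):
    """Brute-force box search: for each trial period and transit epoch,
    split points into in-transit and out-of-transit and score the dip
    by the difference of the two mean flux levels."""
    best_depth, best_period, best_epoch = 0.0, None, None
    for p in periods:
        for t0 in np.linspace(0.0, p, n_epochs, endpoint=False):
            phase = ((time - t0 + p / 2) % p) - p / 2
            in_transit = np.abs(phase) < duration / 2
            if in_transit.sum() < 3 or (~in_transit).sum() < 3:
                continue
            depth = flux[~in_transit].mean() - flux[in_transit].mean()
            if depth > best_depth:
                best_depth, best_period, best_epoch = depth, p, t0
    return best_depth, best_period, best_epoch
```

The candidates this kind of search surfaces are exactly the mixture of “U-shaped”, “V-shaped”, and spurious signals that the trained model is then asked to rank.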

A low signal-to-noise detection in the light curve of the Kepler 90 star detected by the BLS algorithm. The detection has period 14.44912 days and duration 2.70408 hours (0.11267 days), beginning 2.2 days after 12:00 on 1/1/2009 (the year the Kepler telescope launched).

To run this detected signal though our trained model, we simply execute the following command:

python  --kepler_id=11442793 --period=14.44912 --t0=2.2 --duration=0.11267 --kepler_data_dir=$HOME/astronet/kepler

The output of the command is prediction = 0.94, which means the model is 94% certain that this signal is a real planet. Of course, this is only a small step in the overall process of discovering and validating an exoplanet: the model’s prediction is not proof one way or the other. The process of validating this signal as a real exoplanet requires significant follow-up work by an expert astronomer — see Sections 6.3 and 6.4 of our paper for the full details. In this particular case, our follow-up analysis validated this signal as a bona fide exoplanet, and it’s now called Kepler-90 i!

Our work here is far from done. We’ve only searched 670 stars out of 200,000 observed by Kepler — who knows what we might find when we turn our technique to the entire dataset. Before we do that, though, we have a few improvements we want to make to our model. As we discussed in our paper, our model is not yet as good at rejecting binary stars and instrumental false positives as some more mature computer heuristics. We’re hard at work improving our model, and now that it’s open sourced, we hope others will do the same!

If you’d like to learn more, Chris is featured on the latest episode of This Week In Machine Learning & AI discussing his work.
