Cloud Strife is a fictional character and the main protagonist of the 1997 role-playing video game Final Fantasy VII, developed by Square (now Square Enix), and of several of its sequels and spin-offs. In Final Fantasy VII, Cloud is a mercenary claiming to be formerly o…
From Joëlle Jones, the superstar artist who took Catwoman into battle, comes this brand-new statue from the DC COVER GIRLS line. Selina Kyle lounges on a vault she’s ready to crack open with a few flicks of her claws.
Inspired by the powerful women of the DC Universe, DC COVER GIRLS is a long-running line that features dynamic depictions of the most famous superheroines and super-villains in premium 9-inch scale statues.
This DC BOMBSHELLS statue presents a 1940s spin on a comic book romance that has spanned generations.
Inspired by the art of internationally renowned designer and illustrator Ant Lucia and sculpted by Jack Mathews, this statue may look like Catwoman’s going in for a kiss, but a closer look at her hands shows her sights set on the Batmobile! This statue measures approximately 11″ tall and is limited to 5,000 pieces.
Inspired by vintage pinup art, the DC BOMBSHELLS statue line features DC superheroes during World War II and launched the DC Comics series of the same name.
“A war is raging between what’s left of the human race and Skynet. Both sides eagerly search for any advantage… The Rebel Terminator is born.”
Sideshow is thrilled to present the Rebel Terminator – Mythos Premium Format™ Figure, a powerful new ally in the struggle against Skynet.
As a collection, Sideshow’s Mythos series captures the limitless possibility of fan-favorite franchises by emphasizing core themes and introducing unique story details to popular fictional universes. The Rebel Terminator emerges as an exciting new figure whose story blends seamlessly into the mythology of the Terminator franchise, personifying the spirit of technology and resistance to spark the imagination of fans everywhere.
The Rebel Terminator stands 19.5” tall on a cracked battlefield base littered with debris and decommissioned endoskeleton skulls. Battle damage from her deadly hunts reveals the renegade assassin’s own mechanical features: an exposed arm and a piercing red eye tell the story of the Rebel Terminator’s mysterious past.
The resin Rebel Terminator – Mythos Premium Format™ Figure features an incredibly detailed sculpted costume, consisting of a black jacket, white undershirt, green pants, and combat boots. Her rugged look is textured with tatters and tears to reflect the raging battle against the oppressive forces of Skynet.
The Rebel Terminator shows no signs of slowing down in her directive, clutching a severed endo-skull in her mechanical hand. Already preparing to eliminate her next cybernetic target, she holds a 40-watt plasma rifle triumphantly over her shoulder. Another plasma rifle hangs at her back, amid a variety of tactical gear and pouches to protect her human façade from further destruction.
Posted by Bo Chen, Software Engineer and Jeffrey M. Gilbert, Member of Technical Staff, Google Research
Over the past year, there have been exciting innovations in the design of deep networks for vision applications on mobile devices, such as the MobileNet model family and integer quantization. Many of these innovations have been driven by performance metrics that focus on meaningful user experiences in real-world mobile applications, requiring inference to be both low-latency and accurate. While the accuracy of a deep network model can be conveniently estimated with well-established benchmarks in the computer vision community, latency is surprisingly difficult to measure and no uniform metric has been established. This lack of measurement platforms and uniform metrics has hampered the development of performant mobile applications.
Today, we are happy to announce the On-device Visual Intelligence Challenge (OVIC), part of the Low-Power Image Recognition Challenge Workshop at the 2018 Computer Vision and Pattern Recognition conference (CVPR2018). A collaboration with Purdue University, the University of North Carolina and IEEE, OVIC is a public competition for real-time image classification that uses state-of-the-art Google technology to significantly lower the barrier to entry for mobile development. OVIC provides two key features to catalyze innovation: a unified latency metric and an evaluation platform.
A Unified Metric
OVIC focuses on the establishment of a unified metric aligned directly with accurate and performant operation on mobile devices. The metric is defined as the number of correct classifications within a specified per-image average time limit of 33ms. This latency limit allows every frame in a live 30 frames-per-second video to be processed, thus providing a seamless user experience. Prior to OVIC, it was difficult to enforce such a limit because latency, as experienced in real-world applications on real-world devices, was hard to measure accurately and uniformly. Without a repeatable mobile development platform, researchers have relied primarily on approximate latency metrics that are convenient to compute, such as the number of multiply-accumulate operations (MACs). The intuition is that multiply-accumulates constitute the most time-consuming operations in a deep neural network, so their count should be indicative of the overall latency. However, these metrics are often poor predictors of on-device latency, because many aspects of a model can affect the average latency of each MAC in typical implementations.
The graph above shows that while the number of MACs is correlated with inference latency, there is significant variation in the mapping. The number of MACs is thus a poor proxy for latency, and since latency directly affects user experience, we believe it is paramount to optimize latency directly rather than to limit the number of MACs as a proxy.
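As a rough illustration of the metric itself (this is not the official OVIC scoring code; the function and its inputs are hypothetical), a submission could be scored as follows: count correct classifications, but only if the average per-image latency stays within the 33ms budget.

```python
def ovic_score(predictions, labels, latencies_ms, budget_ms=33.0):
    """Hypothetical sketch of an OVIC-style metric: the score is the
    number of correct classifications, contingent on the average
    per-image latency staying within the budget."""
    avg_latency = sum(latencies_ms) / len(latencies_ms)
    if avg_latency > budget_ms:
        return 0  # over budget: no credit under this simplified rule
    return sum(int(p == y) for p, y in zip(predictions, labels))

# 3 of 4 images correct, average latency 30.9ms (within budget).
print(ovic_score([1, 2, 3, 0], [1, 2, 3, 9], [30.0, 31.5, 28.0, 34.1]))  # → 3
```

Note that individual images may exceed 33ms (the fourth one here takes 34.1ms); only the average is constrained, which matches the per-image average limit described above.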
An Evaluation Platform
As mentioned above, a primary issue with latency is that it has previously been challenging to measure reliably and repeatably, due to variations in implementation, running environment and hardware architectures. Recent successes in mobile development overcome these challenges with the help of a convenient mobile development platform, including optimized kernels for mobile CPUs, light-weight portable model formats, increasingly capable mobile devices, and more. However, these various platforms have traditionally required resources and development capabilities that are only available to larger universities and industry.
With that in mind, we are releasing OVIC’s evaluation platform, which includes a number of components designed to make replicable, comparable mobile development and evaluation accessible to the broader research community:
- TOCO compiler for optimizing TensorFlow models for efficient inference
- TensorFlow Lite inference engine for mobile deployment
- A benchmarking SDK that can be run locally on any Android phone
- Sample models to showcase successful mobile architectures that run inference in floating-point and quantized modes
- Google’s benchmarking tool for reliable latency measurements on specific Pixel phones (available to registered contestants).
Using the tools available in OVIC, a participant can conveniently incorporate measurement of on-device latency into their design loop without having to worry about optimizing kernels, purchasing latency/power measurement devices, or designing the framework to drive them. The only requirement for entry is experience with training computer vision models in TensorFlow, which can be found in this tutorial.
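For a sense of what such a design-loop measurement involves, here is a minimal timing-harness sketch (this is not the benchmarking SDK itself; `run_inference` is a placeholder for, e.g., an invocation of a TensorFlow Lite interpreter):

```python
import time

def measure_average_latency_ms(run_inference, images, warmup=3):
    """Sketch of a benchmarking loop: run a few untimed warm-up
    inferences (to trigger caching, JIT, and frequency scaling),
    then time the full pass and report the per-image average in
    milliseconds."""
    for img in images[:warmup]:
        run_inference(img)  # warm-up runs are deliberately not timed
    start = time.perf_counter()
    for img in images:
        run_inference(img)
    elapsed = time.perf_counter() - start
    return elapsed * 1000.0 / len(images)
```

On a real device the variance between runs can be substantial, which is exactly why OVIC provides a standardized measurement service on specific Pixel phones rather than leaving the methodology to each contestant.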
With OVIC, we encourage the entire research community to improve the classification performance of low-latency high-accuracy models towards new frontiers, as shown in the following graphic.
|Sampling of current MobileNet mobile models illustrating the tradeoff between increased accuracy and reduced latency.|
We cordially invite you to participate here before the deadline on June 15th, and help us discover new mobile vision architectures that will propel development into the future.
We would like to acknowledge our core contributors Achille Brighton, Alec Go, Andrew Howard, Hartwig Adam, Mark Sandler and Xiao Zhang. We would also like to acknowledge our external collaborators Alex Berg and Yung-Hsiang Lu. We give special thanks to Andre Hentz, Andrew Selle, Benoit Jacob, Brad Krueger, Dmitry Kalenichenko, Megan Cummins, Pete Warden, Rajat Monga, Shiyu Hu and Yicheng Fan.
“Oh, we’re using our made-up names? In that case, I am Spider-Man.”
This changes everything. The whole world has geared up for the upcoming Marvel blockbuster Avengers: Infinity War, which arrives in less than one week’s time. The friendly neighborhood Spider-Man will have to use his wit, strength, and Spider-Sense to help the other heroes stop the warlord from enacting his master plan to collect all the powerful Infinity Stones.
As the movie that the entire Marvel Cinematic Universe has been leading to, excitement over this massive battle is at a fever pitch, and today Hot Toys is thrilled to present the highly anticipated, groundbreaking 1/6th scale Iron Spider Collectible Figure, which received a wave of positive reviews after its debut at the Avengers: Infinity War exhibition powered by Hot Toys!
Designed by Tony Stark, the impressive brand-new futuristic Iron Spider Suit is equipped with the latest amazing high-tech weapons. Expertly crafted based on the stylish appearance of the Iron Spider with the most up-to-date details from the movie, the collectible figure features three interchangeable heads: a newly developed masked head with LED light-up function; a masked head sculpt with four pairs of interchangeable eye pieces to create numerous combinations of Spider-Man’s expressions; and a newly painted head sculpt featuring a remarkable likeness of Tom Holland. It also includes a newly developed specialized body, a skillfully tailored metallic red and dark blue Iron Spider suit with gold-colored trims that perfectly captures all the tiniest details, two pairs of articulated Iron Spider pincers with stylish gold-colored paint, a variety of spider-web shooting effect parts, and a movie-themed dynamic figure stand.
Hot Toys MMS482 1/6th scale Iron Spider Collectible Figure specially features: Authentic and detailed likeness of Iron Spider in Avengers: Infinity War | Newly developed interchangeable masked head with LED light-up function (white light, battery operated) | Newly developed masked head sculpt with four (4) sets of interchangeable eyepieces that can create numerous combinations of expressions | Newly painted interchangeable head sculpt with authentic likeness of Tom Holland as Peter Parker | Approximately 28.5 cm tall | Newly developed body with 30 points of articulation | Twelve (12) pieces of interchangeable hands including: pair of fists, pair of relaxed hands, pair of palms for cobweb shooting, pair of palms for cobweb swinging, pair of open hands, pair of gesturing hands
Costume: newly developed metallic red and dark blue colored Spider-Man suit with gold trims, embossed cobweb pattern and dark blue spider emblem on chest | pair of red-colored boots with gold trims and embossed cobweb pattern | Two (2) pairs of detachable gold-colored articulated Iron Spider pincers
Accessories: Spider-Man mask (not wearable on figure) | open spider web effect accessory | Four (4) strings of spider web in different shapes and lengths, attachable to the web-shooters | specially designed Avengers: Infinity War themed dynamic figure stand with movie logo
Release date: Approximately Q1 – Q2, 2019
Posted by Pi-Chuan Chang, Software Engineer and Lizzie Dorfman, Technical Program Manager, Google Brain Team
Last December we released DeepVariant, a deep learning model that has been trained to analyze genetic sequences and accurately identify the differences, known as variants, that make us all unique. Our initial post focused on how DeepVariant approaches “variant calling” as an image classification problem, and is able to achieve greater accuracy than previous methods.
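To make the “variant calling as image classification” framing concrete, here is a deliberately simplified toy (this is not DeepVariant’s actual encoding, which builds multi-channel pileup images from base identity, base quality, strand, and more): it merely counts, at each reference position, how many aligned reads disagree with the reference.

```python
def toy_pileup(reference, reads):
    """Toy sketch of a pileup summary: for each reference position,
    count how many aligned reads disagree with the reference base.
    Positions where many reads consistently disagree are candidate
    variant sites."""
    mismatches = [0] * len(reference)
    for start, seq in reads:  # each read: (alignment offset, bases)
        for i, base in enumerate(seq):
            pos = start + i
            if pos < len(reference) and base != reference[pos]:
                mismatches[pos] += 1
    return mismatches

# Two of three reads carry a T at position 2, hinting at a variant there.
print(toy_pileup("ACGTA", [(0, "ACTTA"), (1, "CTTA"), (0, "ACGTA")]))  # → [0, 0, 2, 0, 0]
```

DeepVariant’s insight is to hand a richer, image-like version of this evidence to a convolutional network, letting the model learn which patterns of disagreement are true variants and which are sequencing artifacts.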
Today we are pleased to announce the launch of DeepVariant v0.6, which includes some major accuracy improvements. In this post we describe how we train DeepVariant, and how we were able to improve DeepVariant’s accuracy for two common sequencing scenarios, whole exome sequencing and polymerase chain reaction sequencing, simply by adding representative data into DeepVariant’s training process.
Many Types of Sequencing Data
Approaches to genomic sequencing vary depending on the type of DNA sample (e.g., from blood or saliva), how the DNA was processed (e.g., amplification techniques), which technology was used to sequence the data (e.g., instruments can vary even within the same manufacturer) and what section or how much of the genome was sequenced. These differences result in a very large number of sequencing “datatypes”.
Typically, variant calling tools have been tuned for one specific datatype and perform relatively poorly on others. Given the extensive time and expertise involved in tuning variant callers for new datatypes, it seemed infeasible to customize each tool for every one. In contrast, with DeepVariant we are able to improve accuracy for new datatypes simply by including representative data in the training process, without negatively impacting overall performance.
Truth Sets for Variant Calling
Deep learning models depend on having high quality data for training and evaluation. In the field of genomics, the Genome in a Bottle (GIAB) consortium, which is hosted by the National Institute of Standards and Technology (NIST), produces human genomes for use in technology development, evaluation, and optimization. The benefit of working with GIAB benchmarking genomes is that their true sequence is known (at least to the extent currently possible). To achieve this, GIAB takes a single person’s DNA and repeatedly sequences it using a wide variety of laboratory methods and sequencing technologies (i.e. many datatypes) and analyzes the resulting data using many different variant calling tools. A tremendous amount of work then follows to evaluate and adjudicate discrepancies to produce a high-confidence “truth set” for each genome.
The majority of DeepVariant’s training data is from the first benchmarking genome released by GIAB, HG001. The sample, from a woman of northern European ancestry, was made available as part of the International HapMap Project, the first large-scale effort to identify common patterns of human genetic variation. Because DNA from HG001 is commercially available and so well characterized, it is often the first sample used to test new sequencing technologies and variant calling tools. By using many replicates and different datatypes of HG001, we can generate millions of training examples which helps DeepVariant learn to accurately classify many datatypes, and even generalize to datatypes it has never seen before.
Improved Exome Model in v0.5
In the v0.5 release we formalized a benchmarking-compatible training strategy that withholds from training a complete sample, HG002, as well as any data from chromosome 20. HG002, the second benchmarking genome released by GIAB, is from a male of Ashkenazi Jewish ancestry. Testing on this sample, which differs in both sex and ethnicity from HG001, helps to ensure that DeepVariant is performing well for diverse populations. Additionally, reserving chromosome 20 for testing guarantees that we can evaluate DeepVariant’s accuracy for any datatype that has truth data available.
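The holdout strategy can be sketched as a simple filter (a hypothetical illustration only; the field names are invented, and the real pipeline operates on sequencing data rather than dictionaries):

```python
def benchmarking_split(examples):
    """Sketch of a benchmarking-compatible split: hold out all of
    sample HG002, and all of chromosome 20 from any sample, for
    evaluation; everything else may be used for training."""
    train, heldout = [], []
    for ex in examples:  # each example: dict with 'sample' and 'chrom'
        if ex["sample"] == "HG002" or ex["chrom"] == "chr20":
            heldout.append(ex)
        else:
            train.append(ex)
    return train, heldout

train, heldout = benchmarking_split([
    {"sample": "HG001", "chrom": "chr1"},
    {"sample": "HG002", "chrom": "chr1"},   # held out: HG002 sample
    {"sample": "HG001", "chrom": "chr20"},  # held out: chromosome 20
])
print(len(train), len(heldout))  # → 1 2
```

Because neither HG002 nor chromosome 20 ever influences training, accuracy measured on them reflects genuine generalization rather than memorization.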
In v0.5 we also focused on exome data, which is the subset of the genome that directly codes for proteins. The exome is only ~1% of the whole human genome, so whole exome sequencing (WES) costs less than whole genome sequencing (WGS). The exome also harbors many variants of clinical significance which makes it useful for both researchers and clinicians. To increase exome accuracy we added a variety of WES datatypes, provided by DNAnexus, to DeepVariant’s training data. The v0.5 WES model shows 43% fewer indel (insertion-deletion) errors and a 22% reduction in single nucleotide polymorphism (SNP) errors.
Improved Whole Genome Sequencing Model for PCR+ data in v0.6
Our newest release of DeepVariant, v0.6, focuses on improved accuracy for data that has undergone DNA amplification via polymerase chain reaction (PCR) prior to sequencing. PCR is an easy and inexpensive way to amplify very small quantities of DNA; sequencing such amplified material results in what is known as PCR-positive (PCR+) sequencing data. It is well known, however, that PCR can be prone to bias and errors, and non-PCR-based (or PCR-free) DNA preparation methods are increasingly common. DeepVariant’s training data prior to the v0.6 release was exclusively PCR-free data, and PCR+ was one of the few datatypes for which DeepVariant had underperformed in external evaluations. By adding PCR+ examples to DeepVariant’s training data, also provided by DNAnexus, we have seen significant accuracy improvements for this datatype, including a 60% reduction in indel errors.
|DeepVariant v0.6 shows major accuracy improvements for PCR+ data, largely attributable to a reduction in indel errors. Here we re-analyze two PCR+ samples that were used in external evaluations, including DNAnexus on the left (see details in figure 10) and bcbio on the right, showing how indel accuracy improves with each DeepVariant version.|
Independent evaluations of DeepVariant v0.6 from both DNAnexus and bcbio are also available. Their analyses support our findings of improved indel accuracy, and also include comparisons to other variant calling tools.
We released DeepVariant as open source software to encourage collaboration and to accelerate the use of this technology to solve real world problems. As the pace of innovation in sequencing technologies continues to grow, including more clinical applications, we are optimistic that DeepVariant can be further extended to produce consistent and highly accurate results. We hope that researchers will use DeepVariant v0.6 to accelerate discoveries, and if there is a sequencing datatype that you would like to see us prioritize, please let us know.
The SCDF Exoskeleton, developed jointly with local engineering company Hope Technik and the Ministry of Home Affairs, was unveiled at the SCDF’s annual workplan seminar on Wednesday (April 18). A frame to be worn by firefighters, the Exoskeleton bears the wei…
Hercules is a 2014 American 3D action fantasy adventure film directed by Brett Ratner, written by Ryan J. Condal and Evan Spiliotopoulos and starring Dwayne Johnson, Ian McShane, Rufus Sewell and John Hurt. It is based on the graphic novel Hercules: The Thracian Wars. Dwayne Johnson stars as Hercules the son of Zeus.
Hercules, son of Zeus and the mortal Alcmene, is one of the most famous Greek heroes of all time. Angered by Zeus’ betrayal, Hera sent two snakes to kill Hercules in his crib while he was still an infant, but Hercules was so incredibly strong and fearless that he strangled the snakes before they could kill him. Later, Hercules was ordered by Apollo to perform twelve heroic labors, which made him famous and granted him immortality among the gods of Olympus.
TBLeague 1/6th scale Hercules Action Figure features: head sculpt, TB League male seamless body with metal skeleton, 3 pairs of interchangeable hands, lion headgear, pair of feet in shoes with long shafts, belt, waistband, forearm armors, right shoulder armor, strap for upper body, shield, men’s briefs, leather skirt, long sword, wolf’s-fang mace, dagger with sheath, and base