Hot Toys Ant-Man and the Wasp 1/6th scale Evangeline Lilly as The Wasp Collectible Figure

“Thanks to you, we had to run. We’re still running.”

The third Marvel Cinematic Universe entry of the year, Ant-Man and the Wasp, arrives in theatres in just a few days! In this upcoming sequel, the newly debuted Wasp, with the special ability to fly at great speed, teams up with Ant-Man for an urgent new mission from Dr. Hank Pym! Whether big or small, the Wasp's strength grows to superhuman levels when she uses her powers. To mark her grand entrance in Ant-Man and the Wasp, Hot Toys is delighted to present the Wasp 1/6th scale Collectible Figure!

Beautifully crafted based on the appearance of Evangeline Lilly as Hope van Dyne in the movie, the Wasp figure includes two interchangeable heads: a newly developed head sculpt with a detailed ponytail, and a completely new helmeted head sculpt with LED light-up function that closely resembles the closed helmet revealing part of the Wasp's face. It also comes with two pairs of interchangeable wings (a pair of standby wings and a pair of articulated wings for various flying poses), a striking brand new Wasp suit, a Wasp miniature figure, two pieces of disc equipment, an opened helmet accessory attachable to the back of the figure, and a specially designed dynamic figure stand with character backdrop.

Scroll down to see the rest of the pictures.
Click on them for bigger and better views.

Hot Toys MMS498 Ant-Man and the Wasp 1/6th scale The Wasp Collectible Figure specially features: Authentic and detailed likeness of the Wasp in Ant-Man and the Wasp | newly developed head sculpt with authentic likeness of Evangeline Lilly as Hope van Dyne | newly developed helmeted head sculpt with LED light-up function (battery operated) | Movie-accurate facial expression with skin texture and make-up | Newly developed body, approximately 29 cm tall, with over 28 points of articulation | Two (2) pairs of interchangeable Wasp wings including: pair of articulated wings, pair of standby wings | Seven (7) pieces of interchangeable gloved hands including: pair of fists, pair of relaxed hands, pair of gesturing hands, left hand for holding disc equipment

Costume: metallic dark blue and brass-colored Wasp suit with embossed patterns, red-colored trims, and weathering effects | dark blue-colored boots

Accessories: attachable opened helmet accessory | miniature Wasp with stand (approximately 2.9 cm tall) | Two (2) pieces of disc equipment | Specially designed dynamic figure stand with movie logo, character nameplate and a character backdrop

Release date: Approximately Q3 – Q4, 2019


Hot Toys Ant-Man and the Wasp 1/6th scale Paul Rudd as Ant-Man 12-inch Collectible Figure

“You know, I’m an Avenger now.”

Real heroes. Not actual size.

In a week's time, the highly anticipated Marvel blockbuster Ant-Man and the Wasp hits the big screen! Struggling to balance life as a super hero with life as a full-time father, Scott is confronted by Dr. Hank Pym and Hope van Dyne with an urgent new mission. This time he has to put on the Ant-Man suit, which allows him to grow or shrink, and learn to fight alongside the Wasp as the team works together to uncover secrets from their hidden past.

In anticipation of the opening of this new sequel, Hot Toys is excited to present the 1/6th scale Ant-Man Collectible Figure! The movie-accurate collectible figure is specially crafted based on the image of Paul Rudd as Scott Lang / Ant-Man in the movie. It includes two interchangeable heads: a newly developed head sculpt with a stunning likeness, and a newly developed helmeted head sculpt with LED light-up function that closely resembles the closed helmet revealing part of Ant-Man's face. It also comes with a skilfully tailored Ant-Man suit that enhances articulation, a standing Ant-Man miniature figure, a shrunken lab, two pieces of disc equipment, an opened helmet accessory attachable to the back of the figure, and a specially designed figure stand with character backdrop.

Scroll down to see all the pictures.
Click on them for bigger and better views.

Hot Toys MMS497 Ant-Man and the Wasp 1/6th scale Ant-Man Collectible Figure specially features: Authentic and detailed likeness of Ant-Man in Ant-Man and the Wasp | newly developed head sculpt with authentic likeness of Paul Rudd as Scott Lang | newly developed helmeted head sculpt with LED light-up function (battery operated) | Movie-accurate facial expression with detailed wrinkles and skin texture | Body approximately 30 cm tall with over 30 points of articulation | Six (6) pieces of interchangeable gloved hands including: pair of fists, pair of open hands, left hand for holding disc equipment, gesturing right hand

Costume: metallic red and black-colored Ant-Man suit with embossed patterns, silver colored trims, and weathering effects | silver-colored Ant-Man particle belt | black-colored boots

Accessories: miniature Ant-Man (approximately 2.7 cm tall) | shrunken lab | attachable opened helmet accessory | Two (2) pieces of disc equipment | Specially designed figure stand with movie logo, character nameplate and a character backdrop

Check out the action figure review of Hot Toys MMS308 1/6th scale Ant-Man 12-inch Collectible Figure posted on my toy blog HERE and HERE


Hot Toys Batman: Arkham Origins 1/6th scale Deathstroke 12-inch (32cm) Collectible Figure

Pre-order

“It appears the game is over before it even begins” – Deathstroke

Slade Wilson, AKA Deathstroke, is one of the world’s greatest and most deadly mercenaries in DC Comics. He was part of an experimental super-soldier program which allowed him to gain metahuman strength, speed, and healing abilities. Today, Hot Toys is very excited to officially introduce the 1/6th scale collectible figure of Deathstroke inspired by the designs from the highly acclaimed video game Batman: Arkham Origins.

The collectible is expertly crafted based on the appearance of Deathstroke in the game, featuring a newly developed helmeted head sculpt with one-eyed mask, a meticulously tailored multi-layer Deathstroke suit with battle damage and weathering effects, an interchangeable battle-damaged chest armor plate specially designed for the alternative katana-holding pose, and an array of detailed weapons including Deathstroke's ballistic staff handle with interchangeable ends displaying different battle modes, a remote claw, a pistol, a katana, two grenades, and several detachable bullets for the shoulder armor, plus a figure stand with specially designed backdrop!

Hot Toys VGM30 Batman: Arkham Origins 1/6th scale Deathstroke Collectible Figure specially features: Authentic and detailed likeness of Deathstroke in the Batman: Arkham Origins game | newly developed Deathstroke helmeted head sculpt with one-eyed mask | Newly developed specialized muscular body, approximately 32 cm tall, with over 30 points of articulation | interchangeable battle-damaged chest armor plate | Nine (9) pieces of interchangeable gloved hands including: pair of fists, pair of relaxed hands, pair of hands for holding remote claw, pair of hands for holding katana, energy staff holding left hand


Scroll down to see the rest of the pictures.
Click on them for bigger and better views.

Costume: newly designed and greatly detailed orange and blue-colored battle-damaged Deathstroke armor with weathering effects, silver-colored undershirt with scale pattern, black-colored pants, black-colored leather-like utility belt with katana sheath and pouches, black-colored leather-like belt with pouches and a thigh-mounted pistol holster, thigh holster with pouch, black-colored boots

Weapons: ballistic staff handle with two (2) sets of interchangeable ends (collapsed and extended), remote claw with articulated claw, pistol, katana, Two (2) grenades, Seven (7) bullets attachable to the left shoulder armor

Accessory: Specially designed figure stand with game logo, character nameplate and backdrop



Scalable Deep Reinforcement Learning for Robotic Manipulation

Posted by Alex Irpan, Software Engineer, Google Brain Team, and Peter Pastor, Senior Roboticist, X

How can robots acquire skills that generalize effectively to diverse, real-world objects and situations? While designing robotic systems that effectively perform repetitive tasks in controlled environments, like building products on an assembly line, is fairly routine, designing robots that can observe their surroundings and decide the best course of action while reacting to unexpected outcomes is exceptionally difficult. However, there are two tools that can help robots acquire such skills from experience: deep learning, which is excellent at handling unstructured real-world scenarios, and reinforcement learning, which enables longer-term reasoning while exhibiting more complex and robust sequential decision making. Combining these two techniques has the potential to enable robots to learn continuously from their experience, allowing them to master basic sensorimotor skills using data rather than manual engineering.

Designing reinforcement learning algorithms for robot learning introduces its own set of challenges: real-world objects span a wide variety of visual and physical properties, subtle differences in contact forces can make predicting object motion difficult, and objects of interest can be obstructed from view. Furthermore, robotic sensors are inherently noisy, adding to the complexity. All of these factors make it incredibly difficult to learn a general solution unless there is enough variety in the training data, which takes time to collect. This motivates exploring learning algorithms that can effectively reuse past experience, similar to our previous work on grasping, which benefited from large datasets. However, that previous work could not reason about the long-term consequences of its actions, which is important for learning how to grasp. For example, if multiple objects are clumped together, pushing one of them apart (called “singulation”) will make the grasp easier, even if doing so does not directly result in a successful grasp.

Examples of singulation.

To be more efficient, we need to use off-policy reinforcement learning, which can learn from data that was collected hours, days, or weeks ago. To design such an off-policy reinforcement learning algorithm that can benefit from large amounts of diverse experience from past interactions, we combined large-scale distributed optimization with a new fitted deep Q-learning algorithm that we call QT-Opt. A preprint is available on arXiv.
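To make the off-policy setup concrete, below is a minimal numpy sketch of how a Q-learning target can be computed from a stored transition when the action space is continuous: rather than an exact argmax over actions, the maximization is approximated with a CEM-style sampling optimizer, the kind of stochastic optimizer QT-Opt relies on. The function names, the placeholder q_fn, and all hyperparameters here are illustrative assumptions, not the production system.

```python
import numpy as np

def sample_max_q(q_fn, state, action_dim, iters=3, samples=64, elites=6):
    """Approximate max_a Q(s, a) for continuous actions with a CEM-style
    sampling loop: repeatedly refit a Gaussian to the best-scoring actions."""
    mean, std = np.zeros(action_dim), np.ones(action_dim)
    for _ in range(iters):
        actions = mean + std * np.random.randn(samples, action_dim)
        scores = np.array([q_fn(state, a) for a in actions])
        elite = actions[np.argsort(scores)[-elites:]]        # best actions
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return q_fn(state, mean)

def bellman_target(transition, q_fn, action_dim, gamma=0.9):
    """Q-learning target for one logged (state, action, reward, next_state,
    done) tuple -- computed entirely from stored data, no new robot rollouts."""
    _, _, reward, next_state, done = transition
    if done:
        return reward
    return reward + gamma * sample_max_q(q_fn, next_state, action_dim)

# Toy usage with a dummy Q-function over a 4-D gripper action.
dummy_q = lambda s, a: -np.linalg.norm(a - 0.5)
print(bellman_target(("s", "a", 0.0, "s_next", False), dummy_q, action_dim=4))
```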

QT-Opt is a distributed Q-learning algorithm that supports continuous action spaces, making it well-suited to robotics problems. To use QT-Opt, we first train a model entirely offline, using whatever data we’ve already collected. This doesn’t require running the real robot, making it easier to scale. We then deploy and finetune that model on the real robot, further training it on newly collected data. As we run QT-Opt, we accumulate more offline data, letting us train better models, which lets us collect better data, and so on.
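The collect-and-retrain cycle described above can be summarized as a short driver loop. In the sketch below, train_offline, deploy_and_collect, and finetune are hypothetical stubs standing in for the distributed trainer and the robot fleet, so only the shape of the workflow is meaningful.

```python
def train_offline(buffer):
    """Stand-in for the distributed Q-learning trainer (no robot needed)."""
    return {"trained_on": len(buffer)}

def deploy_and_collect(policy, episodes=10):
    """Stand-in for running the current policy on the real robot fleet."""
    return [{"success": True}] * episodes

def finetune(policy, new_episodes):
    """Stand-in for continuing training on the freshest on-robot data."""
    policy["finetuned_on"] = len(new_episodes)
    return policy

def qt_opt_cycle(offline_buffer, num_rounds=3):
    policy = None
    for _ in range(num_rounds):
        policy = train_offline(offline_buffer)      # 1. learn from logged data
        new_episodes = deploy_and_collect(policy)   # 2. run on real robots
        policy = finetune(policy, new_episodes)     # 3. refresh on new data
        offline_buffer.extend(new_episodes)         # 4. grow the offline set
    return policy

print(qt_opt_cycle([{"success": False}] * 100))
```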

To apply this approach to robotic grasping, we used 7 real-world robots, which ran for 800 total robot hours over the course of 4 months. To bootstrap collection, we started with a hand-designed policy that succeeded 15-30% of the time. Data collection switched to the learned model when it started performing better. The policy takes a camera image and returns how the arm and gripper should move. The offline data contained grasps on over 1000 different objects.

Some of the training objects used.

In the past, we’ve seen that sharing experience across robots can accelerate learning. We scaled this training and data gathering process to ten GPUs, seven robots, and many CPUs, allowing us to collect and process a large dataset of over 580,000 grasp attempts. At the end of this process, we successfully trained a grasping policy that runs on a real world robot and generalizes to a diverse set of challenging objects that were not seen at training time.

Seven robots collecting grasp data.

Quantitatively, the QT-Opt approach succeeded in 96% of the grasp attempts across 700 trial grasps on previously unseen objects. Compared to our previous supervised-learning based grasping approach, which had a 78% success rate, our method reduced the error rate by more than a factor of five (from a 22% failure rate down to 4%).

The objects used at evaluation time. To make the task challenging, we aimed for a large variety of object sizes, textures, and shapes.

Notably, the policy exhibits a variety of closed-loop, reactive behaviors that are often not found in standard robotic grasping systems:

  • When presented with a set of interlocking blocks that cannot be picked up together, the policy separates one of the blocks from the rest before picking it up.
  • When presented with a difficult-to-grasp object, the policy figures out it should reposition the gripper and regrasp it until it has a firm hold.
  • When grasping in clutter, the policy probes different objects until the fingers hold one of them firmly, before lifting.
  • When we perturbed the robot by intentionally swatting the object out of the gripper — something it had not seen during training — it automatically repositioned the gripper for another attempt.

Crucially, none of these behaviors were engineered manually. They emerged automatically from self-supervised training with QT-Opt, because they improve the model’s long-term grasp success.

Examples of the learned behaviors. In the left GIF, the policy corrects for the moved ball. In the right GIF, the policy tries several grasps until it succeeds at picking up the tricky object.

Additionally, we've found that QT-Opt reaches this higher success rate using less training data, albeit taking longer to converge. This is especially exciting for robotics, where the bottleneck is usually collecting real robot data rather than training time. Combining this with other data efficiency techniques (such as our prior work on domain adaptation for grasping) could open several interesting avenues in robotics. We're also interested in combining QT-Opt with recent work on learning how to self-calibrate, which could further improve generality.

Overall, the QT-Opt algorithm is a general reinforcement learning approach that’s giving us good results on real world robots. Besides the reward definition, nothing about QT-Opt is specific to robot grasping. We see this as a strong step towards more general robot learning algorithms, and are excited to see what other robotics tasks we can apply it to. You can learn more about this work in the short video below.

Acknowledgements
This research was conducted by Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, and Sergey Levine. We’d also like to give special thanks to Iñaki Gonzalo and John-Michael Burke for overseeing the robot operations, Chelsea Finn, Timothy Lillicrap, and Arun Nair for valuable discussions, and other people at Google and X who’ve contributed their expertise and time towards this research. A preprint is available on arXiv.


Self-Supervised Tracking via Video Colorization

Posted by Carl Vondrick, Research Scientist, Machine Perception

Tracking objects in video is a fundamental problem in computer vision, essential to applications such as activity recognition, object interaction, or video stylization. However, teaching a machine to visually track objects is challenging partly because it requires large, labeled tracking datasets for training, which are impractical to annotate at scale.

In “Tracking Emerges by Colorizing Videos”, we introduce a convolutional network that colorizes grayscale videos, but is constrained to copy colors from a single reference frame. In doing so, the network learns to visually track objects automatically without supervision. Importantly, although the model was never trained explicitly for tracking, it can follow multiple objects, track through occlusions, and remain robust over deformations without requiring any labeled training data.

Example tracking predictions on the publicly available academic dataset DAVIS 2017. After learning to colorize videos, a mechanism for tracking emerges automatically without supervision. We specify regions of interest (indicated by different colors) in the first frame, and our model propagates them forward without any additional learning or supervision.

Learning to Recolorize Video
Our hypothesis is that the temporal coherency of color provides excellent large-scale training data for teaching machines to track regions in video. Clearly, there are exceptions when color is not temporally coherent (such as lights turning on suddenly), but in general color is stable over time. Furthermore, most videos contain color, providing a scalable self-supervised learning signal. We therefore decolorize videos and train a model to add the color back. Even though multiple objects may share the same color, learning to colorize still teaches machines to track specific objects or regions.

In order to train our system, we use videos from the Kinetics dataset, a large public collection of videos depicting everyday activities. We convert all video frames except the first into grayscale and train a convolutional network to predict the original colors in the subsequent frames. We expect the model to learn to follow regions in order to accurately recover the original colors. Our main observation is that the need to follow objects for colorization causes a model for object tracking to be learned automatically.

We illustrate the video recolorization task using video from the DAVIS 2017 dataset. The model receives as input one color frame and a gray-scale video, and predicts the colors for the rest of the video. The model learns to copy colors from the reference frame, which enables a mechanism for tracking to be learned without human supervision.

Learning to copy colors from the single reference frame requires the model to learn to internally point to the right region in order to copy the right colors. This forces the model to learn an explicit mechanism that we can use for tracking. To see how the video colorization model works, we show some predicted colorizations from videos in the Kinetics dataset below.

Examples of predicted colors copied from a colorized reference frame and applied to input videos from the publicly available Kinetics dataset.
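A minimal numpy sketch of that pointing mechanism is shown below. It assumes per-pixel embeddings have already been produced by some feature network; the array names, shapes, and temperature are illustrative rather than the paper's exact implementation. Each target pixel forms a softmax pointer over reference pixels and copies a weighted mixture of their quantized colors.

```python
import numpy as np

def copy_from_reference(ref_feats, tgt_feats, ref_colors, temperature=1.0):
    """Soft pointer from each target pixel to the reference-frame pixels.

    ref_feats:  (N, D) embeddings of reference-frame pixels
    tgt_feats:  (M, D) embeddings of target-frame pixels
    ref_colors: (N, C) quantized colors (one-hot bins) of the reference frame
    Returns an (M, C) prediction: a weighted mixture of reference colors.
    """
    logits = tgt_feats @ ref_feats.T / temperature    # pixel-to-pixel similarity
    logits -= logits.max(axis=1, keepdims=True)       # numerically stable softmax
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ ref_colors                          # copy colors via the pointer

# Toy usage with random embeddings; the real model learns the features with a
# convolutional network trained end-to-end on the colorization objective.
ref_feats = np.random.randn(100, 64)
tgt_feats = np.random.randn(120, 64)
ref_colors = np.eye(16)[np.random.randint(16, size=100)]  # 16 color bins
predicted_colors = copy_from_reference(ref_feats, tgt_feats, ref_colors)
```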

Although the network is trained without ground-truth identities, our model learns to track any visual region specified in the first frame of a video. We can track outlined objects or a single point in the video. The only change we make is that, instead of propagating colors throughout the video, we now propagate labels representing the regions of interest.
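In code, that switch is a one-line change. Reusing the hypothetical copy_from_reference function and embeddings from the sketch above, one-hot region masks drawn on the first frame propagate exactly the way colors do:

```python
# Propagate region labels instead of colors with the same soft pointer.
ref_labels = np.eye(3)[np.random.randint(3, size=100)]   # 3 hypothetical regions
tgt_labels = copy_from_reference(ref_feats, tgt_feats, ref_labels)
predicted_region = tgt_labels.argmax(axis=1)              # per-pixel region id
```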

Analyzing the Tracker
Since the model is trained on large amounts of unlabeled video, we want to gain insight into what the model learns. The videos below show a standard trick to visualize the embeddings learned by our model: projecting them down to three dimensions using Principal Component Analysis (PCA) and plotting them as an RGB movie. The results show that nearest neighbors in the learned embedding space tend to correspond to object identity, even over deformations and viewpoint changes.

Top Row: We show videos from the DAVIS 2017 dataset. Bottom Row: We visualize the internal embeddings from the colorization model. Similar embeddings will have a similar color in this visualization. This suggests the learned embedding is grouping pixels by object identity.
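This visualization trick is straightforward to reproduce. The self-contained sketch below (with assumed shapes and random features purely for illustration) projects per-pixel embeddings onto their top three principal components and rescales them into an RGB image, so similar embeddings get similar colors:

```python
import numpy as np

def embeddings_to_rgb(feats, h, w):
    """Project (h*w, D) per-pixel embeddings to 3-D with PCA and rescale
    to [0, 1] so they can be shown as an RGB image."""
    centered = feats - feats.mean(axis=0)
    # Principal directions come from the SVD of the centered feature matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:3].T                                   # (h*w, 3)
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return ((proj - lo) / (hi - lo + 1e-8)).reshape(h, w, 3)

# Toy usage: random embeddings for a 32x32 frame.
frame_rgb = embeddings_to_rgb(np.random.randn(32 * 32, 64), 32, 32)
```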

Tracking Pose
We found the model can also track human poses given key-points in an initial frame. We show results on the publicly-available, academic dataset JHMDB where we track a human joint skeleton.

Examples of using the model to track movements of the human skeleton. In this case the input was a human pose for the first frame and subsequent movement is automatically tracked. The model can track human poses even though it was never explicitly trained for this task.

While we do not yet outperform heavily supervised models, the colorization model learns to track video segments and human pose well enough to outperform the latest methods based on optical flow. Breaking down performance by motion type suggests that our model is a more robust tracker than optical flow for many natural complexities, such as dynamic backgrounds, fast motion, and occlusions. Please see the paper for details.

Future Work
Our results show that video colorization provides a signal that can be used for learning to track objects in videos without supervision. Moreover, we found that the failures from our system are correlated with failures to colorize the video, which suggests that further improving the video colorization model can advance progress in self-supervised tracking.

Acknowledgements
This project was only possible thanks to several collaborations at Google. The core team includes Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama and Kevin Murphy. We also thank David Ross, Bryan Seybold, Chen Sun and Rahul Sukthankar.


Chronicle Collectibles 22-inch Terminator Genisys Guardian Quarter Scale Figure

Pre-order

“I’ve been trying to teach him to blend in…I know it needs work.”

Sideshow and Chronicle Collectibles are excited to announce the battle-damaged Terminator Genisys Quarter Scale Guardian. This quarter scale figure portrays Sarah Connor’s Guardian from the newest timeline in the Terminator franchise.

The Guardian comes with two interchangeable hands and sports a cloth jacket. The first hand is equipped with the rifle he uses in the film. The second hand features the magnetic rings he uses to fight the T-3000 at the end of the film.

Chronicle Collectibles 22-inch Terminator Genisys Guardian Quarter Scale Figure specially features: Light-up eyes powered by two (2) AAA batteries (batteries not included) | cloth jacket | Two (2) interchangeable hands: One (1) hand equipped with a rifle from the film, One (1) hand featuring the magnetic rings used to fight the T-3000


Scroll down to see all the pictures.
Click on them for bigger and better views.



Pre-order Sideshow Collectibles 20-inch Tall Obi-Wan Kenobi Premium Format™ Figure

Pre-order

“I was once a Jedi knight, the same as your father.”

Sideshow is proud to present the Obi-Wan Kenobi Premium Format™ Figure.

The Obi-Wan Kenobi Premium Format™ Figure measures 20” tall, standing on a polystone base inspired by the events in Star Wars: A New Hope. Obi-Wan has a detailed portrait sculpted in the likeness of Sir Alec Guinness as the old and stoic Jedi knight.

The Obi-Wan Kenobi Premium Format™ Figure wears a custom-tailored fabric costume, which captures the elegance of the Jedi’s robes, with details of aging and wear from his many years on Tatooine. His brown Jedi cloak has wiring to allow for dynamic posing.

Obi-Wan has a polystone body and the figure includes two different right arms for multiple display options. Show the Jedi knight reaching for his lightsaber hilt, which can be hung from his belt, or display him brandishing a blue saber blade to show that scum and villainy don’t stand a chance around old Ben Kenobi.


Scroll down to see the rest of the pictures.
Click on them for bigger and better views.

The Exclusive Edition of the Obi-Wan Kenobi Premium Format™ Figure includes Ponda Baba’s severed arm and his blaster as accessories for the base, so you can recreate a legendary movie moment in your collection.



Hot Toys 2018 Toy Fair Exclusive: 1/6th scale Neon Tech Iron Man Mark IV Collectible Figure

Iron Man has always been a fan-favorite armored super hero in the Marvel Cinematic Universe! As a special edition release, Hot Toys has pushed the boundaries of possibility and re-imagined the tech-forward suit of armor in bright neon colors, with touches of electro-futurism inspired by modern computer technology. Based on the blueprint of the Mark IV, the newly developed Neon Tech Iron Man demonstrates not only the harmonious interaction between sophisticated devices and advanced weaponry, but also a visually striking armor that reflects the passion and craft of neon culture.

Today, Hot Toys is thrilled to officially introduce the brand new 1/6th scale collectible figure of Neon Tech Iron Man Mark IV, a masterpiece rendered in the magical medium of neon. This heavily armed suit, which incorporates luminous reflective elements, is a Toy Fair Exclusive item only available in selected markets!

Crafted with phenomenal detail and an astonishing level of authenticity, the over-32 cm tall diecast Neon Tech Iron Man Mark IV collectible figure has an array of features, including movie-accurate proportions and a highly detailed armor design with fully enhanced articulation, specially applied shiny black and teal colored armor with luminous reflective patterns that appear under a specialized LED light, LED light-up functions on the eyes, palms, lower chest and forearms, two sets of interchangeable forearm armor, a pair of attachable lasers, and a specially designed hexagonal figure stand with graphic card.

Scroll down to see all the pictures.
Click on them for bigger and better views.

Hot Toys MMS485D24 1/6th scale Neon Tech Iron Man Mark IV Collectible Figure specially features: Authentic and detailed likeness of Iron Man Mark IV in Iron Man 2 | helmeted head with LED light-up function (white light, battery operated) | Movie-accurate proportions and highly detailed armor design | Shiny black and teal colored armor with luminous reflective patterns appearing under a specialized LED light unit | Approximately 32 cm tall with over 30 points of enhanced articulation | Contains die-cast material

Special features on armor: LED-lighted circle-shaped Arc Reactor on chest (white light, battery operated) | LED lights shining through the sides of the ribs and forearms (blue light, battery operated) | pair of detachable shoulder-mounted weapons | interchangeable chest armor | Two (2) sets of interchangeable forearm armor (normal and missile-firing) | pair of built-in shoulder missile launchers | Eight (8) pieces of interchangeable hands including: pair of fists, pair of hands with articulated fingers and light-up repulsors (white light, battery operated), pair of repulsor-firing hands (white light, battery operated), pair of laser-firing hands | Articulated flaps on the back of the armor on both legs | Fully deployable air flaps at the back of the armor | Multi-layered waist armor with enhanced articulation allowing highly flexible movement

Accessories: pair of attachable realistic blue-colored laser accessories | Specially designed hexagonal figure stand with character nameplate and graphic card

Release date: Approximately Q3 – Q4, 2018
