Eager Execution: An imperative, define-by-run interface to TensorFlow

Posted by Asim Shankar and Wolff Dobson, Google Brain Team

Today, we introduce eager execution for TensorFlow. Eager execution is an imperative, define-by-run interface where operations are executed immediately as they are called from Python. This makes it easier to get started with TensorFlow, and can make research and development more intuitive.

The benefits of eager execution include:

  • Fast debugging with immediate run-time errors and integration with Python tools
  • Support for dynamic models using easy-to-use Python control flow
  • Strong support for custom and higher-order gradients
  • Almost all of the available TensorFlow operations

Eager execution is available now as an experimental feature, so we’re looking for feedback from the community to guide our direction.

To understand this all better, let’s look at some code. This gets pretty technical; familiarity with TensorFlow will help.

Using Eager Execution

When you enable eager execution, operations execute immediately and return their values to Python without requiring a Session.run(). For example, to multiply two matrices together, we write this:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

x = [[2.]]
m = tf.matmul(x, x)

It’s straightforward to inspect intermediate results with print or the Python debugger.

print(m)  # The 1x1 matrix [[4.]]

Dynamic models can be built with Python flow control. Here’s an example of the Collatz conjecture using TensorFlow’s arithmetic operations:

a = tf.constant(12)
counter = 0
while not tf.equal(a, 1):
  if tf.equal(a % 2, 0):
    a = a / 2
  else:
    a = 3 * a + 1
  counter += 1

Here, the use of the tf.constant(12) Tensor object promotes all math operations to tensor operations, so all return values will be tensors.
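For instance, plain Python operators applied to a Tensor dispatch to the corresponding TensorFlow ops (a minimal illustration, not from the original post):

a = tf.constant(12)
b = a % 2    # Python's % dispatches to a TensorFlow op because a is a Tensor
print(b)     # a Tensor containing 0, not a Python int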


Most TensorFlow users are interested in automatic differentiation. Because different operations can occur during each call, we record all forward operations to a tape, which is then played backwards when computing gradients. After we’ve computed the gradients, we discard the tape.

If you’re familiar with the autograd package, the API is very similar. For example:

def square(x):
  return tf.multiply(x, x)

grad = tfe.gradients_function(square)

print(square(3.))  # [9.]
print(grad(3.))    # [6.]

The gradients_function call takes a Python function square() as an argument and returns a Python callable that computes the partial derivatives of square() with respect to its inputs. So, to get the derivative of square() at 3.0, invoke grad(3.0), which is 6.

The same gradients_function call can be used to get the second derivative of square:

gradgrad = tfe.gradients_function(lambda x: grad(x)[0])

print(gradgrad(3.)) # [2.]
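gradients_function also handles functions of several arguments, returning one partial derivative per input. A quick sketch (the mul_add helper here is hypothetical, not from the original post):

def mul_add(x, y):
  return x * y + y

grads = tfe.gradients_function(mul_add)
# Partial derivatives with respect to x and y at (2., 3.):
# d/dx = y = 3, d/dy = x + 1 = 3
print(grads(2., 3.))  # [3., 3.]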

As we noted, control flow can cause different operations to run, as in this example:

def abs(x):
  return x if x > 0. else -x

grad = tfe.gradients_function(abs)

print(grad(2.0)) # [1.]
print(grad(-2.0)) # [-1.]

Custom Gradients

Users may want to define custom gradients for an operation, or for a function. This may be useful for multiple reasons, including providing a more efficient or more numerically stable gradient for a sequence of operations.

Here is an example that illustrates the use of custom gradients. Let’s start by looking at the function log(1 + e^x), which commonly occurs in the computation of cross entropy and log likelihoods.

def log1pexp(x):
  return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)

# The gradient computation works fine at x = 0.
print(grad_log1pexp(0.))    # [0.5]
# However, it returns a `nan` at x = 100 due to numerical instability.
print(grad_log1pexp(100.))  # [nan]

We can use a custom gradient for the above function that analytically simplifies the gradient expression. Notice how the gradient function implementation below reuses an expression (tf.exp(x)) that was computed during the forward pass, making the gradient computation more efficient by avoiding redundant computation.

@tfe.custom_gradient
def log1pexp(x):
  e = tf.exp(x)
  def grad(dy):
    return dy * (1 - 1 / (1 + e))
  return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)

# Gradient at x = 0 works as before.
print(grad_log1pexp(0.))    # [0.5]
# And now gradient computation at x = 100 works as well.
print(grad_log1pexp(100.))  # [1.0]
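Custom gradients can also be used to change how gradients flow through part of a computation. As a rough sketch (illustrative only, not from the original post), a function can pass its input through unchanged on the forward pass while clipping the incoming gradient on the backward pass:

@tfe.custom_gradient
def clip_gradient(x):
  def grad(dy):
    # Clip the incoming gradient to unit norm on the backward pass.
    return tf.clip_by_norm(dy, 1.0)
  return tf.identity(x), grad

grad_fn = tfe.gradients_function(lambda x: 3. * clip_gradient(x))
print(grad_fn(2.))  # the incoming gradient 3.0 is clipped to norm 1.0 -> [1.0]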

Building models

Models can be organized in classes. Here’s a model class that creates a (simple) two-layer network that can classify the standard MNIST handwritten digits.

class MNISTModel(tfe.Network):
  def __init__(self):
    super(MNISTModel, self).__init__()
    self.layer1 = self.track_layer(tf.layers.Dense(units=10))
    self.layer2 = self.track_layer(tf.layers.Dense(units=10))

  def call(self, input):
    """Actually runs the model."""
    result = self.layer1(input)
    result = self.layer2(result)
    return result

We recommend using the classes (not the functions) in tf.layers since they create and contain model parameters (variables). Variable lifetimes are tied to the lifetime of the layer objects, so be sure to keep track of them.
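As a quick illustration (a sketch, not from the original post), a layer object creates its variables the first time it is called, once the input shape is known, and owns them from then on:

layer = tf.layers.Dense(units=3)
out = layer(tf.zeros([1, 5]))    # variables are created on this first call
print(len(layer.variables))      # kernel and bias -> 2
print(layer.variables[0].shape)  # (5, 3), inferred from the input's last dimension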

Why are we using tfe.Network? A Network is a container for layers and is a tf.layers.Layer itself, allowing Network objects to be embedded in other Network objects. It also contains utilities to assist with inspection, saving, and restoring.

Even without training the model, we can imperatively call it and inspect the output:

# Let's make up a blank input image
model = MNISTModel()
batch = tf.zeros([1, 1, 784])
print(batch.shape)
# (1, 1, 784)
result = model(batch)
print(result)
# tf.Tensor([[[ 0.  0.  ....  0.]]], shape=(1, 1, 10), dtype=float32)

Note that we do not need any placeholders or sessions. The first time we pass in the input, the sizes of the layers’ parameters are set.

To train any model, we define a loss function to optimize, calculate gradients, and use an optimizer to update the variables. First, here’s a loss function:

def loss_function(model, x, y):
  y_ = model(x)
  return tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_)

And then, our training loop:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
for (x, y) in tfe.Iterator(dataset):
  grads = tfe.implicit_gradients(loss_function)(model, x, y)
  optimizer.apply_gradients(grads)

implicit_gradients() calculates the derivatives of loss_function with respect to all the TensorFlow variables used during its computation.
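The callable it returns produces (gradient, variable) pairs, which is exactly the format optimizer.apply_gradients() expects. A minimal sketch of how the pieces fit together:

grad_fn = tfe.implicit_gradients(loss_function)
grads_and_vars = grad_fn(model, x, y)   # a list of (gradient, variable) pairs
optimizer.apply_gradients(grads_and_vars)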

We can move computation to a GPU the same way we’ve always done with TensorFlow:

with tf.device("/gpu:0"):
for (x, y) in tfe.Iterator(dataset):
optimizer.minimize(lambda: loss_function(model, x, y))

(Note: here we skip storing the loss and gradients and call optimizer.minimize() directly, but you could also use the apply_gradients() method shown above; the two are equivalent.)

Using Eager with Graphs

Eager execution makes development and debugging far more interactive, but TensorFlow graphs have a lot of advantages with respect to distributed training, performance optimizations, and production deployment.

The same code that executes operations when eager execution is enabled will construct a graph describing the computation when it is not. To convert your models to graphs, simply run the same code in a new Python session where eager execution hasn’t been enabled, as seen, for example, in the MNIST example. The value of model variables can be saved and restored from checkpoints, allowing us to move between eager (imperative) and graph (declarative) programming easily. With this, models developed with eager execution enabled can be easily exported for production deployment.
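As a rough sketch of this duality (illustrative only, not from the original post), the same function either runs immediately when eager execution is enabled or builds graph operations to be evaluated by a Session when it is not:

def double_square(x):
  return tf.matmul(x, x) * 2.

# With eager execution enabled, this returns a concrete value immediately:
print(double_square(tf.constant([[2.]])))  # [[8.]]

# Without eager execution, the same call returns a symbolic Tensor that a
# Session must evaluate:
#   with tf.Session() as sess:
#     print(sess.run(double_square(tf.constant([[2.]]))))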

In the near future, we will provide utilities to selectively convert portions of your model to graphs. In this way, you can fuse parts of your computation (such as the internals of a custom RNN cell) for high performance, while keeping the flexibility and readability of eager execution.

How does my code change?

Using eager execution should be intuitive to current TensorFlow users. There are only a handful of eager-specific APIs; most of the existing APIs and operations work with eager enabled. Some notes to keep in mind:

  • As with TensorFlow generally, we recommend that if you have not yet switched from queues to tf.data for input processing, you should; it’s easier to use and usually faster (see the sketch after this list). For help, see this blog post and the documentation page.
  • Use object-oriented layers, like tf.layers.Conv2D() or Keras layers; these have explicit storage for variables.
  • For most models, you can write code so that it will work the same for both eager execution and graph construction. There are some exceptions, such as dynamic models that use Python control flow to alter the computation based on inputs.
  • Once you invoke tfe.enable_eager_execution(), it cannot be turned off. To get graph behavior, start a new Python session.
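Here is a minimal sketch of the tf.data recommendation from the list above; images and labels stand in for in-memory training arrays and are assumptions for illustration only:

dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset = dataset.shuffle(buffer_size=1000).batch(32)

for (x, y) in tfe.Iterator(dataset):
  optimizer.minimize(lambda: loss_function(model, x, y))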

Getting started and the future

This is still a preview release, so you may hit some rough edges. To get started today:

There’s a lot more to talk about with eager execution and we’re excited… or, rather, we’re eager for you to try it today! Feedback is absolutely welcome.


Closing the Simulation-to-Reality Gap for Deep Robotic Learning

Posted by Konstantinos Bousmalis, Senior Research Scientist, and Sergey Levine, Faculty Advisor, Google Brain Team

Each of us can learn remarkably complex skills that far exceed the proficiency and robustness of even the most sophisticated robots, when it comes to basic sensorimotor skills like grasping. However, we also draw on a lifetime of experience, learning over the course of multiple years how to interact with the world around us. Requiring such a lifetime of experience for a learning-based robot system is quite burdensome: the robot would need to operate continuously, autonomously, and initially at a low level of proficiency before it can become useful. Fortunately, robots have a powerful tool at their disposal: simulation.

Simulating many years of robotic interaction is quite feasible with modern parallel computing, physics simulation, and rendering technology. Moreover, the resulting data comes with automatically-generated annotations, which is particularly important for tasks where success is hard to infer automatically. The challenge with simulated training is that even the best available simulators do not perfectly capture reality. Models trained purely on synthetic data fail to generalize to the real world, as there is a discrepancy between simulated and real environments, in terms of both visual and physical properties. In fact, the more we increase the fidelity of our simulations, the more effort we have to expend in order to build them, both in terms of implementing complex physical phenomena and in terms of creating the content (e.g., objects, backgrounds) to populate these simulations. This difficulty is compounded by the fact that powerful optimization methods based on deep learning are exceptionally proficient at exploiting simulator flaws: the more powerful the machine learning algorithm, the more likely it is to discover how to “cheat” the simulator to succeed in ways that are infeasible in the real world. The question then becomes: how can a robot utilize simulation to enable it to perform useful tasks in the real world?

The difficulty of transferring simulated experience into the real world is often called the “reality gap.” The reality gap is a subtle but important discrepancy between reality and simulation that prevents simulated robotic experience from directly enabling effective real-world performance. Visual perception often constitutes the widest part of the reality gap: while simulated images continue to improve in fidelity, the peculiar and pathological regularities of synthetic pictures, and the wide, unpredictable diversity of real-world images, make bridging the reality gap particularly difficult when the robot must use vision to perceive the world, as is the case, for example, in many manipulation tasks. Recent advances in closing the reality gap with deep learning in computer vision for tasks such as object classification and pose estimation provide promising solutions. For example, Shrivastava et al. and Bousmalis et al. explored pixel-level domain adaptation, while Ganin et al. and Bousmalis and Trigeorgis et al. focused on feature-level domain adaptation. These advances required a rethinking of the approaches used to solve the simulation-to-reality domain shift problem for robotic manipulation as well. Although a number of recent works have sought to address the reality gap in robotics through techniques such as machine learning-based domain adaptation (Tzeng et al.) and randomization of simulated environments (Sadeghi and Levine), effective transfer in robotic manipulation has been limited to relatively simple tasks, such as grasping rectangular, brightly-colored objects (Tobin et al. and James et al.) and free-space motion (Christiano et al.). In this post, we describe how learning in simulation (in our case, PyBullet) combined with domain adaptation methods that address the simulation-to-reality domain shift can accelerate learning of robotic grasping in the real world. This approach can enable real robots to grasp a large variety of physical objects, unseen during training, with a high degree of proficiency.

The performance effect of using 8 million simulated samples of procedural objects with no randomization and various amounts of real data.

Before we consider introducing simulated experience, what does it take for our robots to learn to reliably grasp such never-before-seen objects with only real-world experience? In a previous post, we discussed how the Google Brain team and X’s robotics teams teach robots how to grasp a variety of ordinary objects by just using images from a single monocular camera. It takes tens to hundreds of thousands of grasp attempts, the equivalent of thousands of robot-hours of real-world experience. Although distributing the learning across multiple robots expedites this, the realities of real-world data collection, including maintenance and wear-and-tear, mean that these kinds of data collection efforts still take a significant amount of real time. As mentioned above, an appealing alternative is to use off-the-shelf simulators and learn basic sensorimotor skills like grasping in a virtual environment. Training a robot how to grasp in simulation can be parallelized easily over any number of machines, and can provide large amounts of experience in dramatically less time (e.g., hours rather than months) and at a fraction of the cost.

If the goal is to bridge the reality gap for vision-based robotic manipulation, we must answer a few critical questions. First, how do we design simulation so that simulated experience appears realistic to a neural network? And second, how should we integrate simulated and real experience in a way that maximizes transfer to the real world? We studied these questions in the context of a particularly challenging and important robotic manipulation task: vision-based grasping of diverse objects. We extensively evaluated the effect of various simulation design decisions in combination with various techniques for integrating simulated and real experience for maximal performance.

The setup we used for collecting the simulated and real-world datasets.

Images used during training of simulated grasping experience with procedurally generated objects (left) and of real-world experience with a varied collection of everyday physical objects (right). In both cases, we see pairs of image inputs with and without the robot arm present.

When it comes to simulation, there are a number of choices we have to make: the type of objects to use for simulated grasping, whether to use appearance and/or dynamics randomization, and whether to extract any additional information from the simulator that could aid adaptation to the real world. The type of objects we use in simulation is a particularly important choice. A question that comes naturally is: how realistic do the objects used in simulation need to be? Using randomly generated procedural objects is the most desirable choice, because these objects are generated effortlessly on demand and are easy to parameterize if we change the requirements of the task. However, they are not realistic, and one could imagine they might not be useful for transferring the experience of grasping them to the real world. Another choice is to use realistic 3D object models from a publicly available model library, such as the widely used ShapeNet, although this ties our findings to the characteristics of the specific models we are using. In this work, we compared the effect of using procedurally-generated objects and realistic objects from the ShapeNet model repository, and found that simply using random objects generated programmatically was not just sufficient for efficient experience transfer from simulation to reality, but also generalized better to the real world than using the ShapeNet ones.

Some of the procedurally-generated objects used in simulation.

Some of the ShapeNet objects used in simulation.

Some of the physical objects used to collect real grasping experience.

Another decision about our simulated environment has to do with the randomization of the simulation. Simulation randomization has shown promise in providing generalization to real-world environments in previous work. We further evaluated randomization as a way to provide generalization by separately measuring the effect of appearance randomization (randomly changing textures of different visual components of the virtual environment) and dynamics randomization (randomly changing object mass and friction properties). For our task, visual randomization had a positive effect when we did not use domain adaptation methods to aid with generalization, and had no effect when we included domain adaptation. Using dynamics randomization did not show a significant improvement for this particular task; however, it is possible that dynamics randomization might be more relevant in other tasks. These results suggest that, although randomization can be an important part of simulation-to-real-world transfer, the inclusion of effective domain adaptation can have a substantially more pronounced impact for vision-based manipulation tasks.

Appearance randomization in simulation.

Finally, the information we choose to extract and use for our domain adaptation methods has a significant impact on performance. In one of our proposed methods, we utilize the extracted semantic map of the simulated image, i.e., the semantic description of each pixel in the simulated image, and use it to ground our proposed domain adaptation approach to produce semantically-meaningful, realistic samples, as we discuss below.

Our main proposed approach to integrating simulated and real experience, which we call GraspGAN, takes as input synthetic images generated by a simulator, along with their semantic maps, and produces adapted images that look similar to real-world ones. This is possible with adversarial training, a powerful idea proposed by Goodfellow et al. In our framework, a convolutional neural network, the generator, takes as input synthetic images and generates images that another neural network, the discriminator, cannot distinguish from actual real images. The generator and discriminator networks are trained simultaneously and improve together, resulting in a generator that can produce images that are both realistic and useful for learning a grasping model that will generalize to the real world. One way to make sure that these images are useful is to use the semantic maps of the synthetic images to ground the generator. By using the prediction of these masks as an auxiliary task, the generator is encouraged to produce meaningful adapted images that correspond to the original label attributed to the simulated experience. We train a deep vision-based grasping model with both visually-adapted simulated and real images, and attempt to account for the domain shift further by using a feature-level domain adaptation technique, which helps produce a domain-invariant model. Below, you can see the GraspGAN adapting simulated images to realistic ones, along with a semantic map it infers.
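The following is a rough, hypothetical sketch of the kind of generator objective described above; the function and argument names are assumptions for illustration, not the actual code used in this work:

import tensorflow as tf

def graspgan_generator_loss(disc_logits_on_adapted, predicted_masks,
                            simulator_masks, grasp_task_loss):
  """Sketch of a GraspGAN-style generator objective (hypothetical names)."""
  # Adversarial term: the generator tries to make the discriminator label
  # its adapted (simulated-to-"real") images as real.
  adversarial = tf.reduce_mean(
      tf.nn.sigmoid_cross_entropy_with_logits(
          labels=tf.ones_like(disc_logits_on_adapted),
          logits=disc_logits_on_adapted))
  # Auxiliary term: predicting the simulator's semantic map grounds the
  # adapted image in the content of the original simulated scene.
  segmentation = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(
          labels=simulator_masks, logits=predicted_masks))
  # Task term: the downstream grasping model should still be able to learn
  # from the adapted images.
  return adversarial + segmentation + grasp_task_loss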

By using synthetic data and domain adaptation we are able to reduce the number of real-world samples required to achieve a given level of performance by up to 50 times, using only randomly generated objects in simulation. This means that we have no prior information about the objects in the real world, other than pre-specified size limits for the graspable objects. We have shown that we are able to increase performance with various amounts of real-world data, and also that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with hundreds of thousands of labeled real-world samples. This suggests that, instead of collecting labeled experience, it may be sufficient in the future to simply record raw unlabeled images, use them to train a GraspGAN model, and then learn the skills themselves in simulation.

Although this work has not addressed all the issues around closing the reality gap, we believe that our results show that using simulation and domain adaptation to integrate simulated and real robotic experience is an attractive choice for training robots. Most importantly, we have extensively evaluated the performance gains for different available amounts of labeled real-world samples, and for the different design choices for both the simulator and the domain adaptation methods used. This evaluation can hopefully serve as a guide for practitioners to use for their own design decisions and for weighing the advantages and disadvantages of incorporating such an approach in their experimental design.

This research was conducted by K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige, S. Levine, and V. Vanhoucke, with special thanks to colleagues at Google Research and X who’ve contributed their expertise and time to this research. An early preprint is available on arXiv.

The collection of procedurally-generated objects we used in simulation was made publicly available here by Laura Downs.


Lots of Light but with Dark Frames

I’m a big trend guy, and one trend I have been following closely is the interior office space. A lot has been covered with regard to the increase in glass usage, but I’ve been surprised by the framing choices. Dark is in. Black, dark bronze, or even a custom-coated choice that trends darker is starting to pick up more attention vs. the traditional mill/satin look. In Europe the darker look has been the play for a while, but it’s interesting to see that it’s now hit North America. In the end I don’t really care what color the framing is as long as glass is being used. We have a great building product that is surely not just for the exterior of the office building anymore.


–  Building off last week’s note on the Silica Rule, OSHA did put out a new memo on the process, but it’s still not that simple, streamlined piece we need for the industry. I am still gathering info, and if you or your company are doing anything with regard to this, please consider sharing it with me so that not only can I learn but I can also share with the readers.
–  Have you seen the NGA-GANA FAQs? They give some excellent insight into the process. This is such an important move for the industry; I am just excited to see it continue on the right path!

–  Time for this month’s Glass Magazine review… I loved the look, layout, and copy of the GlassBuild review. It captured so much of the flavor of the event. Glass Magazine has been very active with articles on the workforce, and this issue has a dandy on workforce development that featured a ton of best-practice examples. This sort of content is incredibly valuable if you are running a business. Add in the rest of the issue, which had insights on codes, unitized and technology (among other articles), and this was a fabulous issue for insight and education!

–  The ad of the month I will hit on in next week’s post.

–  Last week I noted the Amazon HQ2 competition. Well, 238 cities applied for it, and now the predictions are starting to fly on which city may win… Moody’s came out with their top 10… and since I love lists… here’s who they think, with some comments from me:

1. Austin, TX - Hot city; everyone seemingly loves Austin these days, that’s for sure.

2. Atlanta, GA - They have the space, but that bad traffic will now get even worse.

3. Philadelphia, PA - This is surprising to me; it won’t be cheap or easy to be here at the size that Amazon wants, but it’s a great location in the east.

4. Rochester, NY - I know a manufacturer’s rep who would have a field day with this.

5. Pittsburgh, PA - My old hometown has grown like crazy since I left, and now it’s in the running for this? I must’ve been the one holding the Steel City back.

6. New York City, NY - I will be stunned if this happens.

7. Miami, FL - And stunned here as well; great weather but a nightmare to ship from.

8. Portland, OR - Why? HQ1 is in Seattle; why would they come down the road for HQ2?

9. Boston, MA - Great location and food, but similar to my thoughts on NY; it would be a stunner.

10. Salt Lake City, UT - Great place but seemingly too far west if you want to spread out HQs.

My prediction? I think it’s going to be Dallas, with Atlanta a major possibility. We will see… and we’ll also see how this new HQ affects all of the other businesses in that area that utilize the same workforce.


This was an incredible story: women lost at sea for 5 months!!! Hopefully it gets some play in the news, but knowing our media, it won’t.
This is the worst idea ever. Ever.

I found these top 10 fails on Wheel of Fortune quite humorous.  But I know if I ever went on this show I’d probably miss major easy ones too!


Black Box Toys 1/6th scale Spectre Girl aka Léa Seydoux as Dr. Madeleine Swann in Spectre

Spectre (2015) is the twenty-fourth spy film in the James Bond film series produced by Eon Productions for Metro-Goldwyn-Mayer and Columbia Pictures. It is Daniel Craig’s fourth performance as James Bond, and the second film in the series directed by Sam Mendes following Skyfall. The story sees Bond pitted against the global criminal organisation Spectre and their leader Ernst Stavro Blofeld. Bond attempts to thwart Blofeld’s plan to launch a global surveillance network, and discovers Spectre and Blofeld were behind the events of the previous three films.

Léa Seydoux stars as Dr. Madeleine Swann, a psychologist working at a private medical clinic in the Austrian Alps, and the daughter of Mr. White (last seen in Casino Royale and Quantum of Solace, played by Jesper Christensen).

Black Box Toys 1/6th scale Spectre Girl will come with 1/6th scale “Swann” female head sculpt, silk dress, and PPK Pistol. NOTE: 12-inch figure body and high heels not included!

This offering looks more like a doll than an action figure. Is there a difference? Let’s just say “action figures” are for boys and “dolls” are for girls haha. The term “action figure” was first coined by Hasbro in 1964, to market their G.I. Joe figure to boys who wouldn’t play with dolls. And I’ve said time and again: I collect Action Figures! – see my toy blog post HERE

Back to this offering by Blackbox Toys. It’s weird that they did NOT include the high heels for this figure / doll. Did they expect her to walk around in bare feet?

Scroll down to see the rest of the pictures.
Click on them for bigger and better views.

Related posts:
Blackbox Toys 1/6th scale Daniel Craig as 007 James Bond in Spectre 12-inch figure Review posted on my toy blog HERE and HERE
MC TOYS 1/6th scale James Bond Austrian Action outfit from “Spectre” – keeping Craig warm (preview pics HERE)
The 007 James Bond Car Collection 1:43 scale Lotus Esprit from “The Spy Who Loved Me” posted HERE


Check out this Star Ace 1/6th scale 300 Lena Headey as Queen Gorgo 12" Collectible figure


“Spartan! Come back with your shield, or on it.”

300 is a 2006 American epic war film based on the 1998 comic series 300 by Frank Miller and Lynn Varley. Both are fictionalized retellings of the Battle of Thermopylae within the Persian Wars. The plot revolves around King Leonidas (Gerard Butler), who leads 300 Spartans into battle against the Persian “god-King” Xerxes (Rodrigo Santoro) and his invading army of more than 300,000 soldiers. As the battle rages, Queen Gorgo (Lena Headey) attempts to rally support in Sparta for her husband.

Lena Headey stars as Queen Gorgo, Queen of Sparta (Gorgo has a larger role in the film than she does in the comic book, where she only appears in the beginning)

Star Ace today unveils a sixth scale figure from the film that started it all, 300. Presenting Gorgo, Queen of Sparta, wife of King Leonidas. The Star Ace 1/6th scale Queen Gorgo 12-inch Collectible figure features: 1:6th scale body, approximately 29 cm tall with over 30 points of articulation | Fully realized authentic likeness of Lena Headey as Queen Gorgo in the movie “300” with accurate facial expression and detailed skin texture | Each head sculpt is specially hand-painted with a combination of rooted and sculpted hair | Four (4) interchangeable hands including: pair of open hands, right hand for holding sword, left hand for holding shield


Scroll down to see all the pictures.
Click on them for bigger and better views.

COSTUME: white dress, feet with sandals

ACCESSORIES: Sword, Shield, Gold arm band, Gold earrings, Tooth necklace, Plastic stand with waist clip.

This figure is scheduled to ship first quarter 2018.



Hot Toys Batman: Arkham Knight 1/6th scale Batman (Futura Knight Version) Action Figure


With the overwhelming response to Hot Toys’ last introduction of the 1/6th scale Arkham Knight and Batman figures, we are very excited to revisit the insanely popular Batman: Arkham series by presenting to “Bats” fans Hot Toys’ own re-imagination of the 2039-style Batman Beyond skin from the Batman: Arkham Knight game, officially introduced as a Hot Toys Exclusive: the 1/6th scale Batman (Futura Knight Version) collectible figure!

Inspired by the stylish Batman Beyond skin in the video game, the 1/6th scale Batman (Futura Knight Version) features a newly painted head sculpt with red eyes and two interchangeable black-colored lower faces with neutral and fierce expressions, a masterfully tailored multi-layer and multi-texture Batsuit with a glossy red-colored electroplated Batman logo on the chest armor as well as glossy black and red colored armor plating throughout the body, a new Batman Beyond style Batarang, and a variety of Batman gadgets including a grapnel gun, disruptor, REC gun, freeze grenade and many more!

Hot Toys VGM29 1/6th scale Batman (Futura Knight Version) Collectible Figure’s special features: Batman head with red-colored eyes and the patented Interchangeable Faces Technique (IFT) and two (2) interchangeable black colored lower part of faces capturing Batman’s facial expressions (neutral and fierce) | Inspired by the design of the Batman Beyond skin in Batman: Arkham Knight game | Approximately 33cm tall (Approximately 35cm tall measuring to tips of cowl) Specialized muscular body with over 30 points of articulations | Nine (9) pieces of interchangeable gloved hands including: pair of fists, pair of gripping hands, pair of hands for holding Batarang, pair of weapons or accessories holding hands, open right hand


Scroll down to see the rest of the pictures.
Click on them for bigger and better views.

Costume: highly detailed and meticulously tailored multi-layer and multi-texture Batsuit with a red colored electroplated Batman logo on the chest armor as well as glossy black and red colored armor plating throughout the body | black leather-like cape | red color utility belt | black gauntlets with red colored accents | black boots with red colored accents

Weapons and Gadgets: Two (2) Batarangs, Batman Beyond style Batarang, grapnel gun with interchangeable Batclaw and interchangeable part to become a remote electrical charge gun, explosive gel, disruptor gun, freeze grenade, line launcher, voice synthesizer

Accessories: Specially designed figure stand with game logo and backdrop

Release date: Approximately Q1 – Q2, 2018

