Hacksaw Ridge is a 2016 biographical war drama film directed by Mel Gibson and written by Andrew Knight and Robert Schenkkan, based on the 2004 documentary The Conscientious Objector. The film focuses on the World War II experiences of Desmond Doss, an American pacifist combat medic who, as a Seventh-day Adventist Christian, refused to carry or use a weapon or firearm of any kind. Doss became the first conscientious objector to be awarded the Medal of Honor, for service above and beyond the call of duty during the Battle of Okinawa. Andrew Garfield stars as Doss, with Sam Worthington, Luke Bracey, Teresa Palmer, Hugo Weaving, Rachel Griffiths, and Vince Vaughn in supporting roles.
For those of you who have been clamoring for Army medics, DID has the perfect item for you. This is “Dixon”, a member of the 77th Infantry Division serving as a Combat Medic. Have him take care of all your World War II Allied Forces figures.
You can get this figure from Cotswold Collectibles (link HERE)
DID 1/6th scale WWII 77th Infantry Division Medic “Dixon” 12-inch figure Features: Two super realistic head sculpts (one with scars and weathering), Articulated body, Open palms, Relaxed palms, Palms for holding accessories, Green T-shirt, Green uniform, Green pants, Y strap, Belt, Suspender, M1 helmet, Medical Pouch x 2 with Type I insert + Type II insert, Cantle Ring Strap x 2, M2 First Aid kit pouch, M1936 musette bag, Surgical instrument case, First Aid kit pouch, M43 boots, Canteen with cover x 2, Emergency Medical Tag x 5 with booklet, Solution of Morphine Tartrate x 5 (four with covers) with packing box x 3, 0.5 oz Hard Rubber Black Vials x 6, Crystalline Sulfanilamide Paper Envelope x 5 with packing box x 3, Curity adhesive tape, Adhesive Surgical Plaster, Safety Pins x 7 with cardboard, Metal Container, Flask w/Cup, Double-Blunt Scissors, Hemostatic Forceps x 2, Scalpel x 2, Spring Tissue Forceps, Folding Stretcher, Compressed White Bandage x 5 with box x 2, Geneva Convention Brassard
Dragon Models Limited (DML) 1/6th scale WWII Combat Medic “Doc Peterson” 12-inch action figure posted on my toy blog HERE
Toy Soldier’s 7th Anniversary figure: USMC Force Recon Rifleman/Corpsman, Vietnam 1970 posted HERE
Andrew Garfield posts:
Kitbash 1/6th scale Andrew Garfield as Peter Parker/Spider-Man 12-inch figure (parts unknown) – pics HERE
I got up close to The Amazing Spider-Man 2 cast members, in Singapore for Earth Hour 2014 posted HERE
In the years after the third Arkhorian War, hordes of pirates and mercenaries roamed the savage seas, their swords serving no cause but themselves or the highest bidder. Across these bloodstained waters rode Arhian and her lover, Captain Ras, a proud …
Scale 1:12. Photo: Max Moto Modeling
“I am Batman. This is my city. At night, it belongs to me.”
Criminals of Gotham, beware! Sideshow is proud to present the Batman Premium Format™ Figure.
Batman measures 21 inches tall, perched atop an Arkham cemetery base. Fans familiar with the shaded history of Gotham’s famed asylum may recognize the marks of madness carved into the gravestones beneath Batman’s boots.
The Batman Premium Format™ Figure stands ready to bring justice to his city, wearing an updated version of his iconic costume. The polyresin figure has a sculpted black and grey tactical bodysuit featuring a gold utility belt. Batman’s chest, gauntlets, and boots are textured with elements of battle-damage.
The Caped Crusader’s portrait is detailed with a grim expression, framed by his iconic cowl with white eyes. The Batman Premium Format™ Figure also has a tailored fabric cape with internal wiring to allow custom cape poses when displaying the piece.
“Bring balance to the Force… not leave it in darkness!”
Legendary Jedi Master Obi-Wan Kenobi is one of the most beloved characters in the Star Wars galaxy! Gifted in the ways of the Force, he fought alongside Qui-Gon Jinn, trained Anakin Skywalker, and served as a general in the Republic Army during the Clone Wars. In Star Wars: Episode III Revenge of the Sith, fans finally witness the conflict and climactic battle between Obi-Wan and Anakin, who had turned to the Dark Side!
Today, following the release of Hot Toys’ 1/6th scale Anakin Skywalker collectible figure, we are very excited to officially introduce the eagerly anticipated Deluxe Version of the 1/6th scale collectible figure of Obi-Wan Kenobi from Star Wars: Episode III Revenge of the Sith!
Sophisticatedly crafted based on the appearance of Obi-Wan Kenobi in the film, the 1/6th scale collectible figure features a newly developed head sculpt with stunning likeness, a skillfully tailored Jedi robe and tunic, a LED light-up lightsaber, severed battle droid parts and a specially designed figure base with interchangeable graphic cards!
Moreover, this Deluxe Version will exclusively include hologram figures of Darth Sidious and Anakin with LED light-up table and a baby Luke Skywalker!
Hot Toys MMS478 Star Wars: Episode III Revenge of the Sith 1/6th scale Obi-Wan Kenobi (Deluxe Version) Collectible Figure specially features: Authentic and detailed likeness of Ewan McGregor as Obi-Wan Kenobi in Star Wars: Episode III Revenge of the Sith | Movie-accurate facial expression with detailed wrinkles, beard, and skin texture | Approximately 30.5 cm tall body with over 30 points of articulation | Eight (8) pieces of newly sculpted interchangeable hands including: pair of relaxed hands, pair of open hands, pair of hands for holding lightsaber, two (2) gesturing left hands
Costume: brown colored under-tunic, beige colored tunic, brown colored Jedi robe, brown leather-like belt, beige-colored pants, brown leather-like boots
Weapons: LED-lighted blue lightsaber (blue light, battery operated), blue lightsaber blade in motion (attachable to the hilt), lightsaber hilt
Accessory: Comlink, Three (3) pieces of security battle droid remains, baby Luke Skywalker***, hologram figure of Anakin Skywalker***, hologram figure of Darth Sidious***, LED-lighted security hologram table***, Interchangeable graphic cards, Specially designed figure stand with Obi-Wan Kenobi nameplate and movie logo
*** Exclusive to Deluxe Version
Release date: Approximately Q1 – Q2, 2019
Posted by Olivier Bousquet, Principal Engineer, Google Zürich
Recently, we announced the launch of a new AI research team in our Paris office. And today DeepMind has also announced a new AI research presence in Paris. We are excited about expanding Google’s research presence in Europe, which bolsters the efforts of the existing groups in our Zürich and London offices. As strong supporters of academic research, we are also excited to foster collaborations with France’s vibrant academic ecosystem.
Our research teams in Paris will focus on fundamental AI research, as well as important applications of these ideas to areas such as Health, Science or Arts. They will publish and open-source their results to advance the state-of-the-art in core areas such as Deep Learning and Reinforcement Learning.
Our approach to research is based on building a strong connection with the academic community; contributing to training the next generation of scientists and establishing a bridge between academic and industrial research. We believe that both objectives are key to fostering a healthy research ecosystem that will flourish in the long term. These ideas are very much aligned with some of the recommendations that Fields Medalist and member of French Parliament Cédric Villani is putting forward in his report on AI to the French government.
As we expand our teams in France, we have several initiatives that illustrate our commitment to these goals:
- We are sponsoring the “Artificial Intelligence and Visual Computing” Chair at École Polytechnique (one of the leading higher education institutions in France), which will support their education initiatives in AI
- We just established a partnership with INRIA for conducting collaborative research projects
- We are funding academic research with unrestricted grants mostly dedicated to the support of PhD and postdoc positions through our Faculty Research Awards and PhD Fellowship programs, as well as our Focused Research Awards. As one example, we have recently funded a project on large scale optimization of neural networks led by Francis Bach (INRIA and ENS) and Alexandre d’Aspremont (CNRS and ENS)
- We are working on offering CIFRE PhD positions (joint PhD positions between Google and an academic lab) as well as internships for PhD students
Additionally, we are pleased to announce that one of the world’s leading experts in computer vision, Cordelia Schmid, will begin a dual appointment at INRIA and Google Paris. These kinds of appointments, together with our Visiting Faculty program, are a great way to share ideas and research challenges, and utilize Google’s world-class computing infrastructure to explore new projects at industrial scale.
France has a long tradition of research and educational excellence, and has a very dynamic and active machine learning community. This makes it a great place to pursue our goal of building AI-enabled technologies that can benefit everyone, through fundamental advances in machine learning and related fields.
Posted by Irwan Bello, Research Associate, Google Brain Team
Deep learning models have been deployed in numerous Google products, such as Search, Translate and Photos. The choice of optimization method plays a major role when training deep learning models. For example, stochastic gradient descent works well in many situations, but more advanced optimizers can be faster, especially for training very deep networks. Coming up with new optimizers for neural networks, however, is challenging due to the non-convex nature of the optimization problem. On the Google Brain team, we wanted to see if it was possible to automate the discovery of new optimizers, in a way that is similar to how AutoML has been used to discover new competitive neural network architectures.
In “Neural Optimizer Search with Reinforcement Learning”, we present a method to discover optimization methods with a focus on deep learning architectures. Using this method we found two new optimizers, PowerSign and AddSign, that are competitive on a variety of different tasks and architectures, including ImageNet classification and Google’s neural machine translation system. To help others benefit from this work we have made the optimizers available in TensorFlow.
Neural Optimizer Search makes use of a recurrent neural network controller which is given access to a list of simple primitives that are typically relevant for optimization. These primitives include, for example, the gradient or the running average of the gradient, and lead to search spaces with over 10^10 possible combinations. The controller then generates the computation graph for a candidate optimizer or update rule in that search space.
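As a rough illustration of such a search space (the primitive set and rule grammar below are simplified assumptions for this sketch, not the paper’s exact formulation), a candidate update rule can be encoded as a choice of operands, unary functions, and a binary operation:

```python
import numpy as np

# Hypothetical, simplified primitive set. A rule is a tuple
# (operand1, unary1, operand2, unary2, binary_op), and the controller's
# job would be to pick such tuples that yield good optimizers.
OPERANDS = {
    "g":      lambda g, m: g,           # gradient
    "m":      lambda g, m: m,           # running average of the gradient
    "sign_g": lambda g, m: np.sign(g),
    "sign_m": lambda g, m: np.sign(m),
}
UNARY = {
    "id":   lambda x: x,
    "exp":  np.exp,
    "clip": lambda x: np.clip(x, -1.0, 1.0),
}
BINARY = {
    "mul": lambda a, b: a * b,
    "add": lambda a, b: a + b,
}

def apply_rule(rule, g, m):
    """Evaluate one encoded update rule on a gradient g and its running average m."""
    o1, u1, o2, u2, b = rule
    return BINARY[b](UNARY[u1](OPERANDS[o1](g, m)),
                     UNARY[u2](OPERANDS[o2](g, m)))
```

For example, the rule `("sign_g", "id", "sign_m", "id", "mul")` computes `sign(g) * sign(m)`, the sign-agreement term that both PowerSign and AddSign build on.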
In our paper, proposed candidate update rules (U) are used to train a child convolutional neural network on CIFAR10 for a few epochs and the final validation accuracy (R) is fed as a reward to the controller. The controller is trained with reinforcement learning to maximize the validation accuracies of the sampled update rules. This process is illustrated below.
[Figure: An overview of Neural Optimizer Search using an iterative process to discover new optimizers.]
Interestingly, the optimizers we have found are interpretable. For example, in the PowerSign optimizer we are releasing, each update compares the sign of the gradient and its running average, adjusting the step size according to whether those two values agree. The intuition behind this is that if these values agree, one is more confident in the direction of the update, and thus the step size can be larger. We also discovered a simple learning rate decay scheme, linear cosine decay, which we found can lead to faster convergence.
[Figure: Graph comparing learning rate decay functions for linear cosine decay, stepwise decay and cosine decay.]
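Linear cosine decay multiplies a linear ramp-down by a cosine term, so the learning rate falls faster than plain cosine decay early on. A minimal sketch of the schedule (the defaults for `num_periods`, `alpha`, and `beta` are assumptions for illustration):

```python
import math

def linear_cosine_decay(step, decay_steps, lr=0.01,
                        num_periods=0.5, alpha=0.0, beta=0.001):
    """Learning rate at a given step under linear cosine decay."""
    step = min(step, decay_steps)
    linear = (decay_steps - step) / decay_steps                   # 1 -> 0 ramp
    cosine = 0.5 * (1 + math.cos(math.pi * 2 * num_periods
                                 * step / decay_steps))           # cosine envelope
    return lr * ((alpha + linear) * cosine + beta)
```

With `num_periods=0.5` the cosine completes half a period over training, so the rate decays smoothly from roughly `lr` at step 0 down to `lr * beta` at the final step.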
Neural Optimizer Search found several optimizers that outperform commonly used optimizers on the small ConvNet model. Among the ones that transfer well to other tasks, we found that PowerSign and AddSign improve top-1 and top-5 accuracy of a state-of-the-art ImageNet mobile-sized model by up to 0.4%. They also work well on Google’s Neural Machine Translation system, giving an improvement of up to 0.7 BLEU (bilingual evaluation understudy) points on an English to German translation task.
We are excited that Neural Optimizer Search can not only improve the performance of machine learning models but also potentially lead to new, interpretable equations and discoveries. It is our hope that open sourcing these optimizers in TensorFlow will be useful to machine learning practitioners.