AIA Show Review

Fun and interesting few days in New York for the annual AIA show.  I gained some excellent insight and, as always, got to visit with the best and brightest in our industry.  Before I get into the show review and all that came with it, I have to say the overall vibe of the Times Square area is craziness.  So many people, so much going on.  It never ceases to amaze me.  I also trekked away from Times Square for lunch at what I was told was the best pizza in New York- John’s on Bleecker- and it lived up to the billing.  Also, kudos to my Uber driver, who somehow crammed his car into the tightest of spaces and maneuvered through the thickest traffic ever to get me out of town and to the airport.  Needless to say, I could never ever drive in that city. 

Ok on to our world and the show…
The overall atmosphere around our industry and the markets was exceedingly positive. Many that I talked with were very bullish for the next 2-3 years, citing excellent metrics and a strong foundation of business and growth.  That was exciting to hear.  It didn’t hurt that the Architectural Billings Index (ABI) released during the show was excellent yet again.  So things are rolling, and that is something to feel good about.  And yes, I should add the disclaimer here that this is all good assuming nothing happens at the political level- which quite frankly changes minute to minute anymore.

As for the show itself…
It was solid and better than past events, but probably still not what it should and could be.  Exhibitors who suffered through Orlando’s mess last year at least had something to hang their hat on this time.  The issue with the show that struck me the most was that the show floor was split, with exhibits on the 3rd and 1st floors.  The 3rd floor was huge, well lit, and featured a lot of very big names.  The 1st floor was darker, featured lower ceilings and smaller booths, and despite having big-time companies there, it just felt different.  There was no signage in either hall promoting the other show floor, and I know many architects & attendees had no idea that there were 2 floors.  I know a few folks on the 1st floor had good shows, but I believe those on the 3rd did much better.  I just hate when trade shows break up the floor like that- it's just not good or fair to anyone except the trade show organizer. 

Ok now on to the people and companies… 
NGA had their booth set up to answer code and technical questions with the brilliant duo of Urmilla Jokhu-Sowell and Dr. Tom Culp, and they were swamped.  Loved the education approach because, as we all know, the more education and technical knowledge we can share with architects, the better.  EFCO was back in the show with a very impressive booth, and I loved visiting with Joseph Holmes for a few minutes there.  Very good guy!  It’s been a while since I have seen Jerry Schwabauer and Patrick Muessig of Azon.  They showed an incredible product they are working on with Quaker Window that featured extremely high performance numbers thanks to the Azon product inside.  It’s a new release from them, and you will surely be hearing more about it in the future.  I love the continuing innovation path there!

More innovation was on display at VIG Technologies, where a very interactive display showed their Vacuum Insulating Glass in a heat box as well as a cool acoustical box that really demonstrated the performance of the product.   Vistamatic also had really amazing pieces on display that impressed me on several levels.  Their booth was striking, and I give them credit for making it work since much of it got damaged on the way from the holding area to their booth spot. 

It was fun meeting up with Ted Bleecker at the busy SuperSky Products booth.  Also seeing Brian Thomas there was a bonus.  Good company and great bunch of folks.  Obviously when it comes to people I consider great, that is usually everyone associated with Viracon.  Nicer to me than I deserve.  Their show performance (busy & interesting exhibit) was impressive. 

It’s absolutely awesome to have Dan Plotnick back in our industry.  Dan has been a favorite of mine for many years and after spending a decade+ overseas, and time on the residential side, he’s back at Solar Seal and the CGH companies.  Great add there for them and I will be doing a “Big 3” interview with Dan later this summer because his story is fascinating. 

I always love seeing the float people and seeing how they’ll be moving the needle product wise in the next few years.  The Guardian Glass booth was fabulous and they had at least 3 pieces of very big news (Bird Deterrent Glass, VIG, Jumbo) that will be positive disruptions in the industry.  Thanks as always to the great Chris Dolan and team for being so welcoming.  Vitro also was making news with their Acuity product (love the name, logo and look- don’t doubt the talented Rob Struble as he nailed it again) as well as their push into bigger sizes.  Plus seeing my old pal Steve Cohen there was very cool.  Over at AGC there was a bunch of activity happening but all I cared about was saying hi to my old pal Matt Ferguson.  Just hearing that voice again- that very distinctive sound and drawl, made the show for me.  Good to see him and everyone else there.

Wrapping up, I saw James Wright, and his energy and positive approach are infectious.   Same with my old pal Danik Dancause of Walker.  They debuted a new booth that was so beautiful and impressive that no one noticed the incredible suit Danik rocked on day 1.  Folks, that’s a good booth when that happens.   I am sure I am missing a few that I should be noting, but I think my head is still trying to acclimate to being back in my sleepy Michigan town vs. the bright lights of New York City. 

Overall very good stuff… Now all eyes turn to GlassBuild America.  That’s next and it’s going to be incredible. I am irrationally confident about it and you’ll be hearing more from me in the coming weeks for sure. 

Elsewhere….
Because this review is so long, there’ll be no Big 3 interview this week.  But stay tuned- the ones I have coming up are, I believe, outstanding, and I am so thankful for all of the positive reaction so far.   I really appreciate those who read, and especially those being interviewed, for sharing their insights.

Quick major congrats to Dan McCrickard.  Dan is a class act and a friend, and he just landed an excellent gig at ASSA ABLOY.  Strong company adds talent- love it!  Happy for you Dan!

Last this week- just a programming note… no blog next week as Canada Day and the 4th of July holiday will be upon us.  I will be back in this space the week of July 8th with the latest Glass Magazine review, Big 3 interview, and more.  I sincerely hope you and yours enjoy whichever holiday you are celebrating and please stay safe!! (I hate fireworks- please be careful if you are messing with them!)

LINKS of the WEEK

–  Free train rides- but really should be for life after this right?
–  Deep and fascinating read on tracking the possible Zodiac killer.
VIDEO of the WEEK

You know me- you know I love Rocky and now the Creed series… the latest trailer for Creed II is out… looks promising!


Check out TBLeague (formerly Phicen Ltd) 1/6th scale Bloodshot 12-inch Action Figure Preview

Pre-order Bloodshot 1/6 Scale Figure at BBTS (link HERE)

Bloodshot is a fictional comic book superhero appearing in books published by the American publisher Valiant Comics. Brought back from the dead and infused with cutting-edge nanotechnology, Bloodshot became a nearly unstoppable killing machine. His enhanced strength, speed, endurance, and healing made him the perfect weapon. After defying his programming and escaping his masters in Project Rising Spirit, Bloodshot now fights to rediscover the secret of his true identity… but remains haunted by the past that nearly destroyed him.

TBLeague (formerly Phicen Ltd) PL2018-119 1/6th scale Bloodshot 12-inch Action Figure features: head sculpt, TB League male seamless body with metal skeleton, 2 pairs x interchangeable hands, T-shirt, pants, boots, belt, 2 x cartridge belts, samurai sword, sheath for sword with carrying strap, 3 x handguns with holsters, strap for holding 1 handgun to the thigh, combined gun, fire hydrant, base



Teaching Uncalibrated Robots to Visually Self-Adapt

Posted by Fereshteh Sadeghi, Student Researcher, Google Brain Team

People are remarkably proficient at manipulating objects without needing to adjust their viewpoint to a fixed or specific pose. This capability (referred to as visual motor integration) is learned during childhood from manipulating objects in various situations, and governed by a self-adaptation and mistake correction mechanism that uses rich sensory cues and vision as feedback. However, this capability is quite difficult for vision-based controllers in robotics, which until now have been built on a rigid setup for reading visual input data from a fixed mounted camera which should not be moved or repositioned at train and test time. The ability to quickly acquire visual motor control skills under large viewpoint variation would have substantial implications for autonomous robotic systems — for example, this capability would be particularly desirable for robots that can help rescue efforts in emergency or disaster zones.

In “Sim2Real Viewpoint Invariant Visual Servoing by Recurrent Control” presented at CVPR 2018 this week, we study a novel deep network architecture (consisting of two fully convolutional networks and a long short-term memory unit) that learns from a past history of actions and observations to self-calibrate. Using diverse simulated data consisting of demonstrated trajectories and reinforcement learning objectives, our visually-adaptive network is able to control a robotic arm to reach a diverse set of visually-indicated goals, from various viewpoints and independent of camera calibration.
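The precise network details are in the paper; purely as a rough illustration of the recurrent controller described above (two convolutional encoders plus an LSTM that conditions on the history of observations and actions), here is a minimal PyTorch sketch. The class name, layer sizes, fusion scheme, and 7-dimensional action output are illustrative assumptions, not the published architecture.

```python
# Minimal sketch (not the authors' released code) of a recurrent visual
# servoing policy: one conv encoder for the goal image, one for the current
# observation, and an LSTM over the fused features plus the previous action.
import torch
import torch.nn as nn

class RecurrentVisualServoingNet(nn.Module):
    def __init__(self, action_dim=7, hidden_size=256):
        super().__init__()
        def conv_encoder():
            return nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.goal_encoder = conv_encoder()   # encodes the visually indicated goal
        self.obs_encoder = conv_encoder()    # encodes the current camera frame
        self.lstm = nn.LSTM(64 + 64 + action_dim, hidden_size, batch_first=True)
        self.policy_head = nn.Linear(hidden_size, action_dim)

    def forward(self, goal_img, obs_seq, prev_actions, state=None):
        # goal_img: (B, 3, H, W); obs_seq: (B, T, 3, H, W); prev_actions: (B, T, action_dim)
        B, T = obs_seq.shape[:2]
        goal_feat = self.goal_encoder(goal_img)             # (B, 64)
        obs_feat = self.obs_encoder(obs_seq.flatten(0, 1))  # (B*T, 64)
        obs_feat = obs_feat.view(B, T, -1)
        goal_feat = goal_feat.unsqueeze(1).expand(-1, T, -1)
        x = torch.cat([goal_feat, obs_feat, prev_actions], dim=-1)
        out, state = self.lstm(x, state)      # memory over past observations/actions
        return self.policy_head(out), state   # predicted arm action at each step
```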

Viewpoint invariant manipulation for visually indicated goal reaching with a physical robotic arm. We learn a single policy that can reach diverse goals from sensory input captured from drastically different camera viewpoints. First row shows the visually indicated goals.


The Challenge
Discovering how the controllable degrees of freedom (DoF) affect visual motion can be ambiguous and underspecified from a single image captured from an unknown viewpoint. Identifying the effect of actions on image-space motion and successfully performing the desired task requires a robust perception system augmented with the ability to maintain a memory of past actions. To be able to tackle this challenging problem, we had to address the following essential questions:

  • How can we make it feasible to provide the right amount of experience for the robot to learn the self-adaptation behavior based on pure visual observations that simulate a lifelong learning paradigm?
  • How can we design a model that integrates robust perception and self-adaptive control such that it can quickly transfer to unseen environments?

To do so, we devised a new manipulation task where a seven-DoF robot arm is provided with an image of an object and is directed to reach that particular goal amongst a set of distractor objects, while viewpoints change drastically from one trial to another. In doing so, we were able to simulate both the learning of complex behaviors and the transfer to unseen environments.

Visually indicated goal reaching task with a physical robotic arm and diverse camera viewpoints.

Harnessing Simulation to Learn Complex Behaviors
Collecting robot experience data is difficult and time-consuming. In a previous post, we showed how to scale up learning skills by distributing the data collection and trials to multiple robots. Although this approach expedited learning, it is still not feasibly extendable to learning complex behaviors such as visual self-calibration, where we need to expose robots to a huge space of viewpoints. Instead, we opted to learn such complex behavior in simulation, where we can collect unlimited robot trials and easily move the camera to various random viewpoints. In addition to fast data collection in simulation, we can also sidestep hardware limitations that would require the installation of multiple cameras around a robot.

We use the domain randomization technique to learn generalizable policies in simulation.

To learn visually robust features that transfer to unseen environments, we used a technique known as domain randomization (a.k.a. simulation randomization), introduced by Sadeghi & Levine (2017), that enables robots to learn vision-based policies entirely in simulation such that they can generalize to the real world. This technique was shown to work well for various robotic tasks such as indoor navigation, object localization, and pick and place. In addition, to learn complex behaviors like self-calibration, we harnessed the simulation capabilities to generate synthetic demonstrations and combined them with reinforcement learning objectives to learn a robust controller for the robotic arm.
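To make the randomization idea concrete, here is a small, hypothetical pybullet sketch (the simulator the authors credit) that renders an observation from a randomly placed, uncalibrated camera each time it is called. The object files, parameter ranges, and lighting choices are placeholders, not the paper's settings.

```python
# Sketch of viewpoint/appearance randomization in pybullet.
import random
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                   # headless simulation
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.loadURDF("plane.urdf")
arm = p.loadURDF("kuka_iiwa/model.urdf")              # stand-in 7-DoF arm

def randomized_camera_image(width=128, height=128):
    """Render one observation from a random, uncalibrated viewpoint."""
    view = p.computeViewMatrixFromYawPitchRoll(
        cameraTargetPosition=[0.5, 0.0, 0.2],
        distance=random.uniform(0.8, 2.0),
        yaw=random.uniform(0, 360),
        pitch=random.uniform(-60, -10),
        roll=0,
        upAxisIndex=2)
    proj = p.computeProjectionMatrixFOV(
        fov=random.uniform(40, 70), aspect=width / height,
        nearVal=0.05, farVal=5.0)
    # Randomize lighting direction as well, another common randomization axis.
    _, _, rgb, _, _ = p.getCameraImage(
        width, height, viewMatrix=view, projectionMatrix=proj,
        lightDirection=[random.uniform(-1, 1), random.uniform(-1, 1), 1])
    return rgb
```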

Viewpoint invariant manipulation for visually indicated goal reaching with a simulated seven-DoF robotic arm. We learn a single policy that can reach diverse goals from sensory input captured from dramatically different camera viewpoints.


Disentangling Perception from Control
To enable fast transfer to unseen environments, we devised a deep neural network that combines perception and control, trained end-to-end simultaneously while also allowing each to be learned independently if needed. This disentanglement between perception and control eases transfer to unseen environments, and makes the model both flexible and efficient in that each of its parts (i.e. ‘perception’ or ‘control’) can be independently adapted to new environments with small amounts of data. Additionally, while the control portion of the network was trained entirely on simulated data, the perception part of our network was complemented by a small collection of static images with object bounding boxes, without needing to collect whole action-sequence trajectories with a physical robot. In practice, we fine-tuned the perception part of our network with only 76 object bounding boxes coming from 22 images.
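As a rough sketch of that adaptation step, the snippet below freezes a stand-in control module and fine-tunes only a stand-in perception tower, using an auxiliary bounding-box regression head on a handful of labelled real images. The modules, loss, and box parameterization are illustrative assumptions, not the paper's pipeline.

```python
# Freeze control, adapt perception on a few real bounding-box labels.
import torch
import torch.nn as nn

perception = nn.Sequential(                      # stand-in perception tower
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
bbox_head = nn.Linear(64, 4)                     # auxiliary head: (x, y, w, h)
controller = nn.LSTM(64, 256, batch_first=True)  # stand-in control portion

for param in controller.parameters():            # control stays as trained in sim
    param.requires_grad = False

optimizer = torch.optim.Adam(
    list(perception.parameters()) + list(bbox_head.parameters()), lr=1e-4)
criterion = nn.SmoothL1Loss()

def adapt_step(real_images, gt_boxes):
    """One gradient step on a small batch of real static images."""
    preds = bbox_head(perception(real_images))
    loss = criterion(preds, gt_boxes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```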

Real-world robot and moving camera setup. First row shows the scene arrangements and the second row shows the visual sensory input to the robot.

Early Results
We tested the visually-adapted version of our network on a physical robot and on real objects with drastically different appearances than the ones used in simulation. Experiments were performed with either one or two objects on a table — “seen objects” (as labeled in the figure below) were used for visual adaptation using a small collection of real static images, while “unseen objects” had not been seen during visual adaptation. During the test, the robot arm was directed to reach a visually indicated object from various viewpoints. For the two-object experiments, the second object was there to “fool” the robotic arm. While the simulation-only network has good generalization capability (due to being trained with the domain randomization technique), adapting the controller with the very small amount of static visual data boosted performance further, thanks to the flexible architecture of our network.

After adapting the visual features with the small amount of real images, performance was boosted by more than 10%. All used real objects are drastically different from the objects seen in simulation.

We believe that learning online visual self-adaptation is an important yet challenging problem, with the goal of learning generalizable policies for robots that can act in diverse and unstructured real-world settings. Our approach can be extended to any sort of automatic self-calibration. See the video below for more information on this work.

Acknowledgements
This research was conducted by Fereshteh Sadeghi, Alexander Toshev, Eric Jang and Sergey Levine. We would also like to thank Erwin Coumans and Yunfei Bai for providing pybullet, and Vincent Vanhoucke for insightful discussions.


Hot Toys 1/6th scale Star Wars: Episode III Revenge of the Sith Anakin Skywalker (Dark Side) 12-inch Collectible Figure Preview Pics

“You underestimate my power!” – Anakin Skywalker

Once a heroic Jedi Knight, Anakin Skywalker was seduced by the dark side of the Force, became a Sith Lord, and led the Empire’s eradication of the Jedi Order. Tasked with finding and stopping Anakin, Obi-Wan Kenobi has intercepted the fallen Jedi on the planet of Mustafar and fought his former apprentice in an intense lightsaber duel!

With great excitement, Hot Toys is pleased to officially introduce the widely-anticipated 1/6th scale Anakin Skywalker (Dark Side) collectible figure from Star Wars: Episode III Revenge of the Sith as one of this year’s Toy Fair Exclusive items.

Sophisticatedly crafted based on the appearance of Anakin Skywalker in the film, the remarkable 1/6th scale collectible figure features a newly painted head sculpt with stunning likeness and the iconic yellow eyes of the young Skywalker turned Sith Lord, a specially tailored Jedi robe and tunic, an interchangeable mechno right arm, a LED light-up lightsaber, and a LED light-up Mustafar panning droid diorama figure base!


Hot Toys MMS486 Star Wars: Episode III Revenge of the Sith 1/6th scale Anakin Skywalker (Dark Side) Collectible Figure specially features: Authentic and detailed likeness of Hayden Christensen as Anakin Skywalker in Star Wars: Episode III Revenge of the Sith | Newly painted head sculpt with iconic Sith Lord eyes, movie-accurate facial expression and detailed skin texture | Detailed sculpture of Anakin Skywalker’s hair style | Approximately 31 cm tall body with over 30 points of articulation | Nine (9) pieces of interchangeable hands (bare left hands and gloved right hands) including: pair of fists, pair of lightsaber-holding hands, pair of Force-using hands, pair of relaxed hands, opened left hand | Interchangeable mechno right arm

Costume: brown-colored under-tunic, dark brown-colored leather-like tunic, brown-colored Jedi robe, dark brown-colored leather-like belt, brown-colored pants, dark brown-colored leather-like textured boots

Weapons: LED-lighted blue lightsaber (blue light, battery operated), blue lightsaber blade in motion (attachable to the hilt), lightsaber hilt

Accessory: Specially designed Mustafar panning droid floating on lava diorama figure base features 2 LED lighting modes including general light effect and pulsing light effect (battery operated)

Release date: Approximately Q3 – Q4, 2018

The Star Wars: Episode III Revenge of the Sith 1/6th scale Anakin Skywalker (Dark Side) collectible figure will first be available at the Sideshow Collectibles Booth #1929 at San Diego Comic Con 2018! Star Wars fans, please stay tuned to Sideshow’s announcements on this awesome collectible figure! Click here for more information: https://www.sideshowtoy.com/hot-toys/


How Can Neural Network Similarity Help Us Understand Training and Generalization?

Posted by Maithra Raghu, Google Brain Team and Ari S. Morcos, DeepMind

In order to solve tasks, deep neural networks (DNNs) progressively transform input data into a sequence of complex representations (i.e., patterns of activations across individual neurons). Understanding these representations is critically important, not only for interpretability, but also so that we can more intelligently design machine learning systems. However, understanding these representations has proven quite difficult, especially when comparing representations across networks. In a previous post, we outlined the benefits of Canonical Correlation Analysis (CCA) as a tool for understanding and comparing the representations of convolutional neural networks (CNNs), showing that they converge in a bottom-up pattern, with early layers converging to their final representations before later layers over the course of training.

In “Insights on Representational Similarity in Neural Networks with Canonical Correlation” we develop this work further to provide new insights into the representational similarity of CNNs, including differences between networks which memorize (e.g., networks which can only classify images they have seen before) and those which generalize (e.g., networks which can correctly classify previously unseen images). Importantly, we also extend this method to provide insights into the dynamics of recurrent neural networks (RNNs), a class of models that are particularly useful for sequential data, such as language. Comparing RNNs is difficult in many of the same ways as CNNs, but RNNs present the additional challenge that their representations change over the course of a sequence. This makes CCA, with its helpful invariances, an ideal tool for studying RNNs in addition to CNNs. As such, we have additionally open sourced the code used for applying CCA on neural networks with the hope that it will help the research community better understand network dynamics.
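For readers who want a concrete picture of the measurement, below is a minimal, unweighted sketch of a CCA-based distance between two activation matrices (examples × neurons). The paper uses a weighted variant and the released code handles many practical details; this is only the core idea.

```python
# Plain CCA distance between two sets of neural network activations.
import numpy as np

def cca_distance(X, Y, eps=1e-10):
    """Return 1 - mean canonical correlation between activation matrices
    X and Y, each of shape (num_examples, num_neurons)."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Whiten each representation via its SVD.
    Ux, Sx, _ = np.linalg.svd(X, full_matrices=False)
    Uy, Sy, _ = np.linalg.svd(Y, full_matrices=False)
    # Keep only directions with non-negligible variance.
    Ux = Ux[:, Sx > eps]
    Uy = Uy[:, Sy > eps]
    # Canonical correlations are the singular values of Ux^T Uy.
    rho = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return 1.0 - rho.mean()
```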

Representational Similarity of Memorizing and Generalizing CNNs
Ultimately, a machine learning system is only useful if it can generalize to new situations it has never seen before. Understanding the factors which differentiate between networks that generalize and those that don’t is therefore essential, and may lead to new methods to improve generalization performance. To investigate whether representational similarity is predictive of generalization, we studied two types of CNNs:

  • generalizing networks: CNNs trained on data with unmodified, accurate labels and which learn solutions which generalize to novel data.
  • memorizing networks: CNNs trained on datasets with randomized labels such that they must memorize the training data and cannot, by definition, generalize (as in Zhang et al., 2017).

We trained multiple instances of each network, differing only in the initial randomized values of the network weights and the order of the training data, and used a new weighted approach to calculate the CCA distance measure (see our paper for details) to compare the representations within each group of networks and between memorizing and generalizing networks.

We found that groups of different generalizing networks consistently converged to more similar representations (especially in later layers) than groups of memorizing networks (see figure below). At the softmax, which denotes the network’s ultimate prediction, the CCA distance for each group of generalizing and memorizing networks decreases substantially, as the networks in each separate group make similar predictions.

Groups of generalizing networks (blue) converge to more similar solutions than groups of memorizing networks (red). CCA distance was calculated between groups of networks trained on real CIFAR-10 labels (“Generalizing”) or randomized CIFAR-10 labels (“Memorizing”) and between pairs of memorizing and generalizing networks (“Inter”).

Perhaps most surprisingly, in later hidden layers, the representational distance between any given pair of memorizing networks was about the same as the representational distance between a memorizing and generalizing network (“Inter” in the plot above), despite the fact that these networks were trained on data with entirely different labels. Intuitively, this result suggests that while there are many different ways to memorize the training data (resulting in greater CCA distances), there are fewer ways to learn generalizable solutions. In future work, we plan to explore whether this insight can be used to regularize networks to learn more generalizable solutions.

Understanding the Training Dynamics of Recurrent Neural Networks
So far, we have only applied CCA to CNNs trained on image data. However, CCA can also be applied to calculate representational similarity in RNNs, both over the course of training and over the course of a sequence. Applying CCA to RNNs, we first asked whether the RNNs exhibit the same bottom-up convergence pattern we observed in our previous work for CNNs. To test this, we measured the CCA distance between the representation at each layer of the RNN over the course of training with its final representation at the end of training. We found that the CCA distance for layers closer to the input dropped earlier in training than for deeper layers, demonstrating that, like CNNs, RNNs also converge in a bottom-up pattern (see figure below).
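A small sketch of that measurement loop, assuming a hypothetical get_activations(model, batch) helper that returns per-layer activation arrays, and the cca_distance function sketched earlier:

```python
def layerwise_convergence(checkpoints, final_checkpoint, probe_batch,
                          get_activations, cca_distance):
    """Track, per layer, the CCA distance to the final representation over training.

    `get_activations(model, batch)` is assumed to return a dict mapping layer
    names to (num_examples, num_neurons) activation arrays.
    """
    final_acts = get_activations(final_checkpoint, probe_batch)
    curves = {layer: [] for layer in final_acts}
    for ckpt in checkpoints:                      # one entry per saved epoch
        acts = get_activations(ckpt, probe_batch)
        for layer, A in acts.items():
            curves[layer].append(cca_distance(A, final_acts[layer]))
    return curves  # early layers' curves drop sooner than later layers'
```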

Convergence dynamics for RNNs over the course of training exhibit bottom up convergence, as layers closer to the input converge to their final representations earlier in training than later layers. For example, layer 1 converges to its final representation earlier in training than layer 2 than layer 3 and so on. Epoch designates the number of times the model has seen the entire training set while different colors represent the convergence dynamics of different layers.

Additional findings in our paper show that wider networks (e.g., networks with more neurons at each layer) converge to more similar solutions than narrow networks. We also found that trained networks with identical structures but different learning rates converge to distinct clusters with similar performance, but highly dissimilar representations. We also apply CCA to RNN dynamics over the course of a single sequence, rather than simply over the course of training, providing some initial insights into the various factors which influence RNN representations over time.

Conclusions
These findings reinforce the utility of analyzing and comparing DNN representations in order to provide insights into network function, generalization, and convergence. However, there are still many open questions: in future work, we hope to uncover which aspects of the representation are conserved across networks, both in CNNs and RNNs, and whether these insights can be used to improve network performance. We encourage others to try out the code used for the paper to investigate what CCA can tell us about other neural networks!

Acknowledgements
Special thanks to Samy Bengio, who is a co-author on this work. We also thank Martin Wattenberg, Jascha Sohl-Dickstein and Jon Kleinberg for helpful comments.


E&S Special Air Service (SAS) Counter Revolutionary Warfare (CRW) Assaulter figure

The Special Air Service (SAS) is the British Army’s most renowned Special Forces unit. The SAS can trace its existence back to 1941 when British Army volunteers conducted raids behind enemy lines in the North African Campaign of World War II. Currently the unit undertakes a number of roles including covert reconnaissance, counter-terrorism, direct action and hostage rescue.

Much of the information regarding the SAS and its actions is highly classified and is not commented on by the British government or the Ministry of Defence due to the sensitivity of their operations.

From the moment several black-clad figures appeared on the balconies of the Iranian Embassy in London in 1980, the Special Air Service became ‘celebrities’ both at home and overseas, earning the reputation as the world’s most elite Special Force. The Regiment is also famous for its motto “Who Dares Wins”.

An exclusive version only available at Green Wolf Gear, this 1/6th scale SAS CRW Assaulter figure is similar to the one posted earlier on this toy blog (pics HERE).



Great Twins Announces Twelfth Scale T-800 Supreme Action Figure from T2: Judgement Day

Great Twins’ “Twelfth Scale Supreme Action Figure” line specializes in collectible action figures with fabric tailored clothing and related accessories, sized in 1/12th scale (approx. 6 inches tall).

“Twelfth Scale Supreme Action Figure” presents the 1/12th scale collectible action figure of “T-800″ from “Terminator 2: Judgement Day”! The hyper-realistic collectible figure features a head sculpt with a hand-painted likeness of actor Arnold Schwarzenegger in his appearance as “T-800″, mounted atop a magnetic neck which enables flexible head rotation for optimum poseability. The action body features over 25 points of articulation and is clad in his iconic black PU leather jacket and pants ensemble, not forgetting his (removable) sunglasses, to complete that cool “Infiltrator-Unit-killer-robot” look! An amazing array of accessories, including a Mini Gun, Grenade Launcher, Shotgun and Pistol, complete with a removable Grenade Launcher ammo bandolier and black Mini Gun ammo bag, provides maximum play and ensures the T-800 is ready to do battle at the Cyberdyne building to destroy Skynet! A figure stand with name tag completes the set, ensuring stability on your display shelf and a place of pride in your collection.

Great Twins Twelfth Scale T-800 Supreme Action Figure from Terminator 2: Judgement Day features: Detailed head sculpture featuring hand painted likeness of Arnold Schwarzenegger in his appearance as T-800 in “Terminator 2: Judgement Day” | Magnetic neck to enable flexible rotation for the head | Approximately 16cm tall Action body with over 25 points of articulation | removable sunglasses | Three (3) pairs of interchangeable hands | highly detailed black PU leather jacket and pants with belt | Grey T-shirt | black PU leather boots | Mini Gun black ammo bag | Mini Gun | Grenade Launcher | Removable Grenade Launcher ammo bandolier | Shotgun | Pistol | Figure Stand


Related posts:
Hot Toys MMS117 “Terminator 2: Judgment Day” 1/6th scale TERMINATOR T-800 cyborg (Arnold Schwarzenegger) collectible figure review posted on my toy blog HERE
Hot Toys MMS117 “Terminator 2: Judgment Day” 1/6th scale TERMINATOR T-800 cyborg (Arnold Schwarzenegger) collectible figure with GE M134 Minigun – pics HERE


Google at CVPR 2018

Posted by Christian Howard, Editor-in-Chief, Google AI Communications

This week, Salt Lake City hosts the 2018 Conference on Computer Vision and Pattern Recognition (CVPR 2018), the premier annual computer vision event comprising the main conference and several co-located workshops and tutorials. As a leader in computer vision research and a Diamond Sponsor, Google will have a strong presence at CVPR 2018 — over 200 Googlers will be in attendance to present papers and invited talks at the conference, and to organize and participate in multiple workshops.

If you are attending CVPR this year, please stop by our booth and chat with our researchers who are actively pursuing the next generation of intelligent systems that utilize the latest machine learning techniques applied to various areas of machine perception. Our researchers will also be available to talk about and demo several recent efforts, including the technology behind portrait mode on the Pixel 2 and Pixel 2 XL smartphones, the Open Images V4 dataset and much more.

You can learn more about our research being presented at CVPR 2018 in the list below (Googlers highlighted in blue).

Organization
Finance Chair: Ramin Zabih
Area Chairs include: Sameer Agarwal, Aseem Agrawala, Jon Barron, Abhinav Shrivastava, Carl Vondrick, Ming-Hsuan Yang

Orals/Spotlights
Unsupervised Discovery of Object Landmarks as Structural Representations
Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, Honglak Lee

DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
Tao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll, Yebin Liu

Neural Kinematic Networks for Unsupervised Motion Retargetting
Ruben Villegas, Jimei Yang, Duygu Ceylan, Honglak Lee

Burst Denoising with Kernel Prediction Networks
Ben Mildenhall, Jiawen Chen, Jonathan Barron, Robert Carroll, Dillon Sharlet, Ren Ng

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
Benoit Jacob, Skirmantas Kligys, Bo Chen, Matthew Tang, Menglong Zhu, Andrew Howard, Dmitry Kalenichenko, Hartwig Adam

AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions
Chunhui Gu, Chen Sun, David Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, Jitendra Malik

Focal Visual-Text Attention for Visual Question Answering
Junwei Liang, Lu Jiang, Liangliang Cao, Li-Jia Li, Alexander G. Hauptmann

Inferring Light Fields from Shadows
Manel Baradad, Vickie Ye, Adam Yedida, Fredo Durand, William Freeman, Gregory Wornell, Antonio Torralba

Modifying Non-Local Variations Across Multiple Views
Tal Tlusty, Tomer Michaeli, Tali Dekel, Lihi Zelnik-Manor

Iterative Visual Reasoning Beyond Convolutions
Xinlei Chen, Li-jia Li, Fei-Fei Li, Abhinav Gupta

Unsupervised Training for 3D Morphable Model Regression
Kyle Genova, Forrester Cole, Aaron Maschinot, Daniel Vlasic, Aaron Sarna, William Freeman

Learning Transferable Architectures for Scalable Image Recognition
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc Le

The iNaturalist Species Classification and Detection Dataset
Grant van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, Serge Belongie

Learning Intrinsic Image Decomposition from Watching the World
Zhengqi Li, Noah Snavely

Learning Intelligent Dialogs for Bounding Box Annotation
Ksenia Konyushkova, Jasper Uijlings, Christoph Lampert, Vittorio Ferrari

Posters
Revisiting Knowledge Transfer for Training Object Class Detectors
Jasper Uijlings, Stefan Popov, Vittorio Ferrari

Rethinking the Faster R-CNN Architecture for Temporal Action Localization
Yu-Wei Chao, Sudheendra Vijayanarasimhan, Bryan Seybold, David Ross, Jia Deng, Rahul Sukthankar

Hierarchical Novelty Detection for Visual Object Recognition
Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee

COCO-Stuff: Thing and Stuff Classes in Context
Holger Caesar, Jasper Uijlings, Vittorio Ferrari

Appearance-and-Relation Networks for Video Classification
Limin Wang, Wei Li, Wen Li, Luc Van Gool

MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks
Ariel Gordon, Elad Eban, Bo Chen, Ofir Nachum, Tien-Ju Yang, Edward Choi

Deformable Shape Completion with Graph Convolutional Autoencoders
Or Litany, Alex Bronstein, Michael Bronstein, Ameesh Makadia

MegaDepth: Learning Single-View Depth Prediction from Internet Photos
Zhengqi Li, Noah Snavely

Unsupervised Discovery of Object Landmarks as Structural Representations
Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, Honglak Lee

Burst Denoising with Kernel Prediction Networks
Ben Mildenhall, Jiawen Chen, Jonathan Barron, Robert Carroll, Dillon Sharlet, Ren Ng

Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling
Xingyuan Sun, Jiajun Wu, Xiuming Zhang, Zhoutong Zhang, Tianfan Xue, Joshua Tenenbaum, William Freeman

Sparse, Smart Contours to Represent and Edit Images
Tali Dekel, Dilip Krishnan, Chuang Gan, Ce Liu, William Freeman

MaskLab: Instance Segmentation by Refining Object Detection with Semantic and Direction Features
Liang-Chieh Chen, Alexander Hermans, George Papandreou, Florian Schroff, Peng Wang, Hartwig Adam

Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning
Yin Cui, Yang Song, Chen Sun, Andrew Howard, Serge Belongie

Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Sung Jin Hwang, George Toderici, Troy Chinen, Joel Shor

MobileNetV2: Inverted Residuals and Linear Bottlenecks
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen

ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans
Angela Dai, Daniel Ritchie, Martin Bokeloh, Scott Reed, Juergen Sturm, Matthias Nießner

Sim2Real View Invariant Visual Servoing by Recurrent Control
Fereshteh Sadeghi, Alexander Toshev, Eric Jang, Sergey Levine

Alternating-Stereo VINS: Observability Analysis and Performance Evaluation
Mrinal Kanti Paul, Stergios Roumeliotis

Soccer on Your Tabletop
Konstantinos Rematas, Ira Kemelmacher, Brian Curless, Steve Seitz

Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints
Reza Mahjourian, Martin Wicke, Anelia Angelova

AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions
Chunhui Gu, Chen Sun, David Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, Jitendra Malik

Inferring Light Fields from Shadows
Manel Baradad, Vickie Ye, Adam Yedida, Fredo Durand, William Freeman, Gregory Wornell, Antonio Torralba

Modifying Non-Local Variations Across Multiple Views
Tal Tlusty, Tomer Michaeli, Tali Dekel, Lihi Zelnik-Manor

Aperture Supervision for Monocular Depth Estimation
Pratul Srinivasan, Rahul Garg, Neal Wadhwa, Ren Ng, Jonathan Barron

Instance Embedding Transfer to Unsupervised Video Object Segmentation
Siyang Li, Bryan Seybold, Alexey Vorobyov, Alireza Fathi, Qin Huang, C.-C. Jay Kuo

Frame-Recurrent Video Super-Resolution
Mehdi S. M. Sajjadi, Raviteja Vemulapalli, Matthew Brown

Weakly Supervised Action Localization by Sparse Temporal Pooling Network
Phuc Nguyen, Ting Liu, Gautam Prasad, Bohyung Han

Iterative Visual Reasoning Beyond Convolutions
Xinlei Chen, Li-jia Li, Fei-Fei Li, Abhinav Gupta

Learning and Using the Arrow of Time
Donglai Wei, Andrew Zisserman, William Freeman, Joseph Lim

HydraNets: Specialized Dynamic Architectures for Efficient Inference
Ravi Teja Mullapudi, Noam Shazeer, William Mark, Kayvon Fatahalian

Thoracic Disease Identification and Localization with Limited Supervision
Zhe Li, Chong Wang, Mei Han, Yuan Xue, Wei Wei, Li-jia Li, Fei-Fei Li

Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis
Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee

Deep Semantic Face Deblurring
Ziyi Shen, Wei-Sheng Lai, Tingfa Xu, Jan Kautz, Ming-Hsuan Yang

Unsupervised Training for 3D Morphable Model Regression
Kyle Genova, Forrester Cole, Aaron Maschinot, Daniel Vlasic, Aaron Sarna, William Freeman

Learning Transferable Architectures for Scalable Image Recognition
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc Le

Learning Intrinsic Image Decomposition from Watching the World
Zhengqi Li, Noah Snavely

PiCANet: Learning Pixel-wise Contextual Attention for Saliency Detection
Nian Liu, Junwei Han, Ming-Hsuan Yang

Mobile Video Object Detection with Temporally-Aware Feature Maps
Mason Liu, Menglong Zhu

Tutorials
Computer Vision for Robotics and Driving
Anelia Angelova, Sanja Fidler

Unsupervised Visual Learning
Pierre Sermanet, Anelia Angelova

UltraFast 3D Sensing, Reconstruction and Understanding of People, Objects and Environments
Sean Fanello, Julien Valentin, Jonathan Taylor, Christoph Rhemann, Adarsh Kowdle, Jürgen Sturm, Christine Kaeser-Chen, Pavel Pidlypenskyi, Rohit Pandey, Andrea Tagliasacchi, Sameh Khamis, David Kim, Mingsong Dou, Kaiwen Guo, Danhang Tang, Shahram Izadi

Generative Adversarial Networks
Jun-Yan Zhu, Taesung Park, Mihaela Rosca, Phillip Isola, Ian Goodfellow
