Announcing the YouTube-8M Segments Dataset

Posted by Joonseok Lee and Joe Yue-Hei Ng, Software Engineers, Google Research

Over the last two years, the First and Second YouTube-8M Large-Scale Video Understanding Challenge and Workshop have collectively drawn 1000+ teams from 60+ countries to further advance large-scale video understanding research. While these events have enabled great progress in video classification, the YouTube-8M dataset on which they were based included only machine-generated video-level labels and lacked fine-grained, temporally localized information, which limited the ability of machine learning models to predict video content.

To accelerate the research of temporal concept localization, we are excited to announce the release of YouTube-8M Segments, a new extension of the YouTube-8M dataset that includes human-verified labels at the 5-second segment level on a subset of YouTube-8M videos. With the additional temporal annotations, YouTube-8M is now both a large-scale classification dataset as well as a temporal localization dataset. In addition, we are hosting another Kaggle video understanding challenge focused on temporal localization, as well as an affiliated 3rd Workshop on YouTube-8M Large-Scale Video Understanding at the 2019 International Conference on Computer Vision (ICCV’19).

YouTube-8M Segments
Video segment labels provide a valuable resource for temporal localization not possible with video-level labels, and enable novel applications, such as capturing special video moments. To create the YouTube-8M Segments extension, instead of exhaustively labeling all segments in a video, we manually labeled five segments (on average) per randomly selected video from the YouTube-8M validation set, totalling ~237k segments covering 1000 categories.

This dataset, combined with the previous YouTube-8M release containing a very large number of machine generated video-level labels, should allow learning temporal localization models in novel ways. Evaluating such classifiers is of course very challenging if only noisy video-level labels are available. We hope that the newly added human-labeled annotations will help ensure that researchers can more accurately evaluate their algorithms.
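As a concrete illustration of what segment-level evaluation looks like, average precision over ranked segments is one common way to score temporal localization. The function and toy scores below are a hypothetical sketch, not the official challenge metric:

```python
# Sketch: average precision over segments ranked by model confidence.
# Scores and labels below are made up; the real dataset ships ~237k
# human-verified labels on 5-second segments.

def average_precision(scores, labels):
    """AP over segments ranked by predicted score (labels are 0/1)."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    hits, precision_sum = 0, 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label:
            hits += 1
            precision_sum += hits / rank  # precision@rank at each hit
    return precision_sum / max(hits, 1)

# Three 5-second segments scored for a single class, e.g. "cooking":
scores = [0.9, 0.6, 0.2]  # model confidence per segment
labels = [1, 0, 1]        # human-verified segment labels
print(round(average_precision(scores, labels), 3))  # → 0.833
```

Ranking segments well (true segments near the top) drives AP toward 1, which is exactly the behavior human-verified segment labels make measurable.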

The 3rd YouTube-8M Video Understanding Challenge
This year the YouTube-8M Video Understanding Challenge focuses on temporal localization. Participants are encouraged to leverage noisy video-level labels together with a small segment-level validation set in order to better annotate and temporally localize concepts of interest. Unlike last year, there is no model size restriction. Each of the top 10 teams will be awarded $2,500 to support their travel to Seoul to attend ICCV’19. For details, please visit the Kaggle competition page.

The 3rd Workshop on YouTube-8M Large-Scale Video Understanding
Continuing in the tradition of the previous two years, the 3rd workshop will feature four invited talks by distinguished researchers as well as presentations by top-performing challenge participants. We encourage those who wish to attend to submit papers describing their research, experiments, or applications based on the YouTube-8M dataset, including papers summarizing their participation in the challenge above. Please refer to the workshop page for more details.

It is our hope that this newest extension will serve as a unique playground for temporal localization that mimics real world scenarios. We also look forward to the new challenge and workshop, which we believe will continue to advance research in large-scale video understanding. We hope you will join us again!

This post reflects the work of many machine perception researchers including Ke Chen, Nisarg Kothari, Joonseok Lee, Hanhan Li, Paul Natsev, Joe Yue-Hei Ng, Naderi Parizi, David Ross, Cordelia Schmid, Javier Snaider, Rahul Sukthankar, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan, Yexin Wang, Zheng Xu, as well as Julia Elliott and Walter Reade from Kaggle. We are also grateful for the support and advice from our partners at YouTube.

I love you 3000! Hot Toys 1/6th scale LED lighted Iron Man Nano Gauntlet with articulated fingers

I love you 3000!

To celebrate the re-release of Avengers: Endgame, Hot Toys is excited to reveal an additional accessory specially designed for the 1/6th scale Iron Man Mark LXXXV collectible figure to mark the occasion!

Hot Toys [Avengers: Endgame] 1/6th scale Iron Man Mark LXXXV 12-inch Collectible Figure previewed earlier on my toy blog post HERE

Interchangeable on the highly detailed Iron Man figure to recreate the MOMENT, this LED-lighted Nano Gauntlet with articulated fingers takes direct inspiration from the remarkable scene in the movie. Moreover, Hot Toys’ team has greatly improved on the characteristic design of the forearm armor to bring the most true-to-movie details to the figure!

Scroll down to see the rest of the pictures.
Click on them for bigger and better views.

Predicting Bus Delays with Machine Learning

Posted by Alex Fabrikant, Research Scientist, Google Research

Hundreds of millions of people across the world rely on public transit for their daily commute, and over half of the world’s transit trips involve buses. As the world’s cities continue growing, commuters want to know when to expect delays, especially for bus rides, which are prone to getting held up by traffic. While public transit directions provided by Google Maps are informed by many transit agencies that provide real-time data, there are many agencies that can’t provide it due to technical and resource constraints.

Today, Google Maps introduced live traffic delays for buses, forecasting bus delays in hundreds of cities world-wide, ranging from Atlanta to Zagreb to Istanbul to Manila and more. This improves the accuracy of transit timing for over sixty million people. This system, first launched in India three weeks ago, is driven by a machine learning model that combines real-time car traffic forecasts with data on bus routes and stops to better predict how long a bus trip will take.

The Beginnings of a Model
In the many cities without real-time forecasts from the transit agency, we heard from surveyed users that they employed a clever workaround to roughly estimate bus delays: using Google Maps driving directions. But buses are not just large cars. They stop at bus stops; take longer to accelerate, slow down, and turn; and sometimes even have special road privileges, like bus-only lanes.

As an example, let’s examine a Wednesday afternoon bus ride in Sydney. The actual motion of the bus (blue) is running a few minutes behind the published schedule (black). Car traffic speeds (red) do affect the bus, such as the slowdown at 2000 meters, but a long stop at the 800 meter mark slows the bus down significantly compared to a car.

To develop our model, we extracted training data from sequences of bus positions over time, as received from transit agencies’ real time feeds, and aligned them to car traffic speeds on the bus’s path during the trip. The model is split into a sequence of timeline units—visits to street blocks and stops—each corresponding to a piece of the bus’s timeline, with each unit forecasting a duration. A pair of adjacent observations usually spans many units, due to infrequent reporting, fast-moving buses, and short blocks and stops.

This structure is well suited for neural sequence models like those that have recently been successfully applied to speech processing, machine translation, etc. Our model is simpler. Each unit predicts its duration independently, and the final output is the sum of the per-unit forecasts. Unlike many sequence models, our model does not need to learn to combine unit outputs, nor to pass state through the unit sequence. Instead, the sequence structure lets us jointly (1) train models of individual units’ durations and (2) optimize the “linear system” where each observed trajectory assigns a total duration to the sum of the many units it spans.

To model a bus trip (a) starting at the blue stop, the model (b) adds up the delay predictions from timeline units for the blue stop, the three road segments, the white stop, etc.
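The "linear system" view described above can be sketched with ordinary least squares: each pair of adjacent bus position reports constrains the sum of the timeline units it spans. The units and observation values below are hypothetical, and the production system trains neural per-unit predictors rather than solving a plain regression, but the structure is the same:

```python
import numpy as np

# Each row of A marks which timeline units (street blocks and stops) an
# observed interval between two bus position reports spans; y holds the
# elapsed seconds. All values here are made up for illustration.
A = np.array([
    [1, 1, 0, 0],   # observation 1 covers units 0-1
    [0, 1, 1, 1],   # observation 2 covers units 1-3
    [1, 1, 1, 0],   # observation 3 covers units 0-2
], dtype=float)
y = np.array([50.0, 90.0, 80.0])  # observed elapsed seconds

# Least-squares fit of per-unit durations so that spanned sums match.
unit_durations, *_ = np.linalg.lstsq(A, y, rcond=None)

# A trip forecast is then just the sum of the per-unit predictions.
print(round(float(unit_durations.sum()), 1))  # → 103.3
```

The point of the decomposition is visible here: no single observation measures any one block or stop, yet the overlapping spans jointly pin down the per-unit durations.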

Modeling the “Where”
In addition to road traffic delays, in training our model we also take into account details about the bus route, as well as signals about the trip’s location and timing. Even within a small neighborhood, the model needs to translate car speed predictions into bus speeds differently on different streets. In the left panel below, we color-code our model’s predicted ratio between car speeds and bus speeds for a bus trip. Redder, slower parts may correspond to bus deceleration near stops. As for the fast green stretch in the highlighted box, we learn from looking at it in StreetView (right) that our model discovered a bus-only turn lane. By the way, this route is in Australia, where right turns are slower than left, another aspect that would be lost on a model that doesn’t consider peculiarities of location.

To capture unique properties of specific streets, neighborhoods, and cities, we let the model learn a hierarchy of representations for areas of different size, with a timeline unit’s geography (the precise location of a road or a stop) represented in the model by the sum of the embeddings of its location at various scales. We first train the model with progressively heavier penalties on finer-grained location features, and use the results for feature selection. This ensures that fine-grained features are taken into account in areas complex enough that a hundred meters affects bus behavior, as opposed to open countryside, where such fine-grained features seldom matter.
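A minimal sketch of this sum-of-scales representation follows. The scale names, the 8-dimensional size, and the random (untrained) embedding tables are all illustrative assumptions, not the production configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scales, coarse to fine; a unit's geographic representation
# is the SUM of its embedding at every scale that is available.
SCALES = ["city", "neighborhood", "street"]
DIM = 8

tables = {s: {} for s in SCALES}  # normally learned; random stand-ins here

def embed(location):
    """location: dict mapping scale -> id, e.g. {'city': 'sydney', ...}."""
    total = np.zeros(DIM)
    for scale in SCALES:
        key = location.get(scale)
        if key is None:       # coarse-only query: finer scales absent
            continue
        table = tables[scale]
        if key not in table:
            table[key] = rng.normal(size=DIM)
        total += table[key]
    return total

full = embed({"city": "sydney", "neighborhood": "cbd", "street": "george-st"})
coarse = embed({"city": "sydney"})  # still usable without fine features
print(full.shape, np.allclose(full, coarse))
```

Because the representation is a sum, dropping the finer scales degrades gracefully to the coarse-only vector instead of breaking the model.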

At training time, we also simulate the possibility of later queries about areas that were not in the training data. In each training batch, we take a random slice of examples and discard geographic features below a scale randomly selected for each. Some examples are kept with the exact bus route and street, others keep only neighborhood- or city-level locations, and others yet have no geographical context at all. This better prepares the model for later queries about areas where we were short on training data. We expand the coverage of our training corpus by using anonymized inferences about user bus trips from the same dataset that Google Maps uses for popular times at businesses, parking difficulty, and other features. However, even this data does not include the majority of the world’s bus routes, so our models must generalize robustly to new areas.
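The scale-dropping augmentation described above can be sketched as follows; the scale names and per-example cutoff scheme are assumptions for illustration:

```python
import random

random.seed(7)

SCALES = ["city", "neighborhood", "street"]  # coarse -> fine (illustrative)

def drop_fine_features(example, keep_upto):
    """Keep only the `keep_upto` coarsest geographic scales."""
    return {s: example[s] for s in SCALES[:keep_upto] if s in example}

batch = [{"city": "manila", "neighborhood": "makati", "street": "ayala-ave"}
         for _ in range(4)]
# Each example draws its own random cutoff during training, so some keep
# full street-level context, some only the city, some no geography at all.
augmented = [drop_fine_features(ex, random.randrange(len(SCALES) + 1))
             for ex in batch]
print([list(ex) for ex in augmented])
```

Training on these truncated variants is what prepares the model to answer queries about routes it never saw street-level data for.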

Learning the Local Rhythms
Different cities and neighborhoods also run to a different beat, so we allow the model to combine its representation of location with time signals. Buses have a complex dependence on time — the difference between 6:30pm and 6:45pm on a Tuesday might be the wind-down of rush hour in some neighborhoods, a busy dining time in others, and entirely quiet in a sleepy town elsewhere. Our model learns an embedding of the local time of day and day of week signals, which, when combined with the location representation, captures salient local variations, like rush hour bus stop crowds, that aren’t observed via car traffic.

This embedding assigns 4-dimensional vectors to times of the day. Unlike most neural net internals, four dimensions is almost few enough to visualize, so let’s peek at how the model arranges times of day in three of those dimensions, via the artistic rendering below. The model indeed learns that time is cyclical, placing time in a “loop”. But this loop is not just the flat circle of a clock’s face. The model learns wide bends that let other neurons compose simple rules to easily separate away concepts like “middle of the night” or “late morning” that don’t feature much bus behavior variation. On the other hand, evening commute patterns differ much more among neighborhoods and cities, and the model appears to create more complex “crumpled” patterns between 4pm-9pm that enable more intricate inferences about the timings of each city’s rush hour.

The model’s time representation (3 out of 4 dimensions) forms a loop, reimagined here as the circumference of a watch. The more location-dependent time windows like 4pm-9pm and 7am-9am get more complex “crumpling”, while big featureless windows like 2am-5am get bent away with flat bends for simpler rules. (Artist’s conception by Will Cassella, using textures from and HDRIs from hdrihaven.)

Together with other signals, this time representation lets us predict complex patterns even if we hold car speeds constant. On a 10km bus ride through New Jersey, for example, our model picks up on lunchtime crowds and weekday rush hours.
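The post's 4-dimensional time embedding is learned end to end, but a fixed sinusoidal encoding (a simple stand-in, not the actual model) illustrates why a cyclical "loop" is the natural shape for time-of-day and day-of-week features:

```python
import math

# Hand-built cyclical time features: times exactly one period apart map
# to the identical point, giving the "loop" the learned embedding also
# discovers. Purely illustrative; the real model learns its embedding.
def time_features(minute_of_day, minute_of_week):
    day = 2 * math.pi * minute_of_day / (24 * 60)
    week = 2 * math.pi * minute_of_week / (7 * 24 * 60)
    return (math.sin(day), math.cos(day), math.sin(week), math.cos(week))

a = time_features(6 * 60 + 30, 0)             # 6:30am
b = time_features(6 * 60 + 30, 7 * 24 * 60)   # 6:30am one week later
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # → True
```

Unlike this fixed encoding, the learned embedding can stretch or crumple parts of the loop, which is how it devotes more representational room to the highly location-dependent 4pm-9pm window.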

Putting it All Together
With the model fully trained, let’s take a look at what it learned about the Sydney bus ride above. If we run the model on that day’s car traffic data, it gives us the green predictions below. It doesn’t catch everything. For instance, it has the stop at 800 meters lasting only 10 seconds, though the bus stopped for at least 31 seconds. But we stay within 1.5 minutes of the real bus motion, catching a lot more of the trip’s nuances than the schedule or car driving times alone would give us.

The Trip Ahead
One thing not in our model for now? The bus schedule itself. So far, in experiments with official agency bus schedules, they haven’t improved our forecasts significantly. In some cities, severe traffic fluctuations might overwhelm attempts to plan a schedule. In others, the bus schedules might be precise, but perhaps only because transit agencies carefully account for traffic patterns, which we already infer from the data.

We continue to experiment with making better use of schedule constraints and many other signals to drive more precise forecasting and make it easier for our users to plan their trips. We hope we’ll be of use to you on your way, too. Happy travels!

This work was the joint effort of James Cook, Alex Fabrikant, Ivan Kuznetsov, and Fangzhou Xu, on Google Research, and Anthony Bertuca, Julian Gibbons, Thierry Le Boulengé, Cayden Meyer, Anatoli Plotnikov, and Ivan Volosyuk on Google Maps. We thank Senaka Buthpitiya, Da-Cheng Juan, Reuben Kan, Ramesh Nagarajan, Andrew Tomkins, and the greater Transit team for support and helpful discussions; as well as Will Cassella for the inspired reimagining of the model’s time embedding. We are also indebted to our partner agencies for providing the transit data feeds the system is trained on.

Hot Toys Spider-Man: Far From Home 1/6th Spider-Man (Upgraded Suit) Collectible Figure

The release of Spider-Man: Far From Home is just around the corner! Peter Parker plans to leave super heroics behind for a few weeks with his friends for a vacation in Europe, but several creature attacks are plaguing the continent. Nick Fury will have to call out our friendly neighborhood Spidey for his assistance!

With the reveals of brand new Spider-Man suits in the trailers and TV spots, fans are getting very excited for Spidey’s alternative appearances. Today, Hot Toys is ecstatic to bring you Spider-Man and his Upgraded Suit based on Spider-Man: Far From Home in 1/6th scale Collectible Figure prior to the official movie release!

Expertly crafted based on Tom Holland’s appearance in the movie, the latest Spider-Man figure features a newly developed head sculpt with astonishing likeness, a masked head sculpt with multiple pairs of interchangeable Spider-Man eye pieces to create numerous Spider-Man’s expressions, a beautifully designed red and black Upgraded Suit with detailed cobweb pattern, an extremely wide variety of accessories including a pair of glasses, a Spider-Man mask, a smartphone, assorted spider-web shooting effect parts, and a movie-themed dynamic figure stand.

Scroll down to see all the pictures.
Click on them for bigger and better views.

Hot Toys MMS542 1/6th scale Spider-Man (Upgraded Suit) Collectible Figure specially features: Authentic and detailed likeness of Spider-Man in Spider-Man: Far From Home | newly developed head sculpt with authentic likeness of Tom Holland as Peter Parker | interchangeable masked head sculpt with four (4) pairs of interchangeable Spider-Man eye pieces that can create numerous combinations of Spider-Man’s expressions | Approximately 28.5 cm tall Body with 30 points of articulation | Ten (10) pieces of interchangeable hands with black cobweb pattern including: pair of fists, pair of relaxed hands, pair of open hands, pair of hands for cobweb shooting, pair of hands for cobweb swinging

Costume: newly tailored red and black colored Spider-Man Upgraded suit embossed with grayish black trims, cobweb pattern and black spider emblem on chest | red-color boots embossed with grayish black cobweb pattern | Three (3) sets of magnetic web-wings

Accessories: pair of glasses | Spider-Man mask (not wearable on figure) | Six (6) strings of spider web in different shapes and lengths, attachable to the web-shooters | open spider web effect accessory | smartphone | dynamic figure stand with Spider-Man nameplate and the movie logo | *Additional accessory coming soon


Innovations in Graph Representation Learning

Posted by Alessandro Epasto, Senior Research Scientist and Bryan Perozzi, Senior Research Scientist, Graph Mining Team

Relational data representing relationships between entities is ubiquitous on the Web (e.g., online social networks) and in the physical world (e.g., in protein interaction networks). Such data can be represented as a graph with nodes (e.g., users, proteins), and edges connecting them (e.g., friendship relations, protein interactions). Given the widespread prevalence of graphs, graph analysis plays a fundamental role in machine learning, with applications in clustering, link prediction, privacy, and others. To apply machine learning methods to graphs (e.g., predicting new friendships, or discovering unknown protein interactions) one needs to learn a representation of the graph that is amenable to be used in ML algorithms.

However, graphs are inherently combinatorial structures made of discrete parts like nodes and edges, while many common ML methods, like neural networks, favor continuous structures, in particular vector representations. Vector representations are particularly important in neural networks, as they can be directly used as input layers. To get around the difficulties in using discrete graph representations in ML, graph embedding methods learn a continuous vector space for the graph, assigning each node (and/or edge) in the graph to a specific position in a vector space. A popular approach in this area is that of random-walk-based representation learning, as introduced in DeepWalk.

Left: The well-known Karate graph representing a social network. Right: A continuous space embedding of the nodes in the graph using DeepWalk.
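The random-walk idea behind DeepWalk can be sketched in a few lines. The toy graph below is illustrative; in the full method the sampled walks are treated like sentences and fed to a word2vec-style skip-gram model to produce the continuous embeddings:

```python
import random

random.seed(1)

# Tiny toy adjacency list; DeepWalk samples short random walks and learns
# embeddings so that nodes co-occurring on walks land near each other.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def random_walk(start, length):
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(graph[walk[-1]]))
    return walk

# Two walks of length 5 per starting node, the "corpus" for skip-gram.
walks = [random_walk(node, 5) for node in graph for _ in range(2)]
print(len(walks), all(len(w) == 5 for w in walks))  # → 8 True
```

The walk length and number of walks per node are among the hyper-parameters that the Watch Your Step work, discussed below, aims to stop tuning by hand.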

Here we present the results of two recent papers on graph embedding: “Is a Single Embedding Enough? Learning Node Representations that Capture Multiple Social Contexts” presented at WWW’19 and “Watch Your Step: Learning Node Embeddings via Graph Attention” at NeurIPS’18. The first paper introduces a novel technique to learn multiple embeddings per node, enabling a better characterization of networks with overlapping communities. The second addresses the fundamental problem of hyperparameter tuning in graph embeddings, allowing one to easily deploy graph embedding methods with less effort. We are also happy to announce that we have released the code for both papers in the Google Research GitHub repository for graph embeddings.

Learning Node Representations that Capture Multiple Social Contexts
In virtually all cases, the crucial assumption of standard graph embedding methods is that a single embedding has to be learned for each node. Thus, the embedding method can be said to seek to identify the single role or position that characterizes each node in the geometry of the graph. Recent work observed, however, that nodes in real networks belong to multiple overlapping communities and play multiple roles—think about your social network where you participate in both your family and in your work community. This observation motivates the following research question: is it possible to develop methods where nodes are embedded in multiple vectors, representing their participation in overlapping communities?

In our WWW’19 paper, we developed Splitter, an unsupervised embedding method that allows the nodes in a graph to have multiple embeddings to better encode their participation in multiple communities. Our method is based on recent innovations in overlapping clustering based on ego-network analysis, using the persona graph concept, in particular. This method takes a graph G, and creates a new graph P (called the persona graph), where each node in G is represented by a series of replicas called the persona nodes. Each persona of a node represents an instantiation of the node in a local community to which it belongs. For each node U in the graph, we analyze the ego-network of the node (i.e., the graph connecting the node to its neighbors, in this example A, B, C, D) to discover local communities to which the node belongs. For instance, in the figure below, node U belongs to two communities: Cluster 1 (with the friends A and B, say U’s family members) and Cluster 2 (with C and D, say U’s colleagues).

Ego-net of node U

Then, we use this information to “split” node U into its two personas U1 (the family persona) and U2 (the work persona). This disentangles the two communities, so that they no longer overlap.

The ego-splitting method separates node U into two personas.
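A minimal sketch of the ego-splitting step follows, using connected components of the ego-net as the simplest possible local clustering (the framework admits other clustering choices). The tiny graph mirrors the A, B versus C, D example from the text:

```python
# Toy graph: U's family community {A, B} and work community {C, D} only
# overlap at U itself. Edges stored as undirected pairs.
edges = {("u", "a"), ("u", "b"), ("u", "c"), ("u", "d"),
         ("a", "b"), ("c", "d")}

def neighbors(node):
    return ({y for x, y in edges if x == node}
            | {x for x, y in edges if y == node})

def ego_communities(node):
    """Connected components of the ego-net of `node` (node itself removed)."""
    remaining = set(neighbors(node))
    comms = []
    while remaining:
        frontier, comp = {remaining.pop()}, set()
        while frontier:
            v = frontier.pop()
            comp.add(v)
            frontier |= (neighbors(v) & remaining) - comp
            remaining -= comp
        comms.append(comp)
    return comms

# Each local community yields one persona node u_0, u_1, ... of U.
personas = {f"u_{i}": sorted(c) for i, c in enumerate(ego_communities("u"))}
print(sorted(personas.values()))  # → [['a', 'b'], ['c', 'd']]
```

Replacing U with these persona nodes in the graph is what disentangles the two communities before a standard embedding method is run on the persona graph.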

This technique has been used to improve the state-of-the-art results in graph embedding methods, showing up to 90% reduction in link prediction (i.e., predicting which link will form in the future) error on a variety of graphs. The key reason for this improvement is the ability of the method to disambiguate highly overlapping communities found in social networks and other real-world graphs. We further validate this result with an in-depth analysis of co-authorship graphs where authors belong to overlapping research communities (e.g., machine learning and data mining).

Top Left: A typical graph with highly overlapping communities. Top Right: A traditional embedding of the graph on the left using node2vec. Bottom Left: A persona graph of the graph above. Bottom Right: The Splitter embedding of the persona graph. Notice how the persona graph clearly disentangles the overlapping communities of the original graph and Splitter outputs well-separated embeddings.

Automatic Hyper-parameter Tuning via Graph Attention
Graph embedding methods have shown outstanding performance on various ML-based applications, such as link prediction and node classification, but they have a number of hyper-parameters that must be manually set. For example, are nearby nodes more important to capture when learning embeddings than nodes that are further away? Even though experts may be able to fine tune these hyper-parameters, one must do so independently for each graph. To obviate such manual work, in our second paper, we proposed a method to learn the optimal hyper-parameters automatically.

Specifically, many graph embedding methods, like DeepWalk, employ random walks to explore the context around a given node (i.e., the direct neighbors, the neighbors of the neighbors, etc.). Such random walks can have many hyper-parameters that allow tuning of the local exploration of the graph, thus regulating the attention given by the embeddings to nearby nodes. Different graphs may present different optimal attention patterns and hence different optimal hyper-parameters (see the picture below, where we show two different attention distributions). Watch Your Step formulates a model for the performance of the embedding methods based on the above-mentioned hyper-parameters. We then optimize the hyper-parameters to maximize the performance predicted by the model, using standard backpropagation. We found that the values learned by backpropagation agree with the optimal hyper-parameters obtained by grid search.

Our new method for automatic hyper-parameter tuning, Watch Your Step, uses an attention model to learn different graph context distributions. Shown above are two example local neighborhoods around a center node (in yellow) and the context distributions (red gradient) that were learned by the model. The left-side graph shows a more diffused attention model, while the distribution on the right shows one concentrated on direct neighbors.

This work falls under the growing family of AutoML, where we want to alleviate the burden of optimizing the hyperparameters—a common problem in practical machine learning. Many AutoML methods use neural architecture search. This paper instead shows a variant, where we use the mathematical connection between the hyperparameters in the embeddings and graph-theoretic matrix formulations. The “Auto” portion corresponds to learning the graph hyperparameters by backpropagation.
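The matrix formulation mentioned above can be sketched as a learned mixture over powers of the graph's transition matrix: the expected context distribution of walks up to k steps is an attention-weighted sum of 1-hop, 2-hop, ... transitions. The graph and the attention logits below are illustrative (fixed here; trained by backpropagation in the actual method):

```python
import numpy as np

# Toy undirected graph and its row-stochastic transition matrix T.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
T = A / A.sum(axis=1, keepdims=True)

# Attention over walk steps: softmax of logits that would normally be
# LEARNED by backprop instead of hand-tuned (hypothetical values here).
logits = np.array([2.0, 1.0, 0.0])
q = np.exp(logits) / np.exp(logits).sum()

# Expected context distribution: mixture of 1-, 2-, and 3-hop walks.
context = sum(qk * np.linalg.matrix_power(T, k + 1)
              for k, qk in enumerate(q))
print(context.shape, np.allclose(context.sum(axis=1), 1.0))  # → (4, 4) True
```

Skewing the attention toward small k concentrates the context on direct neighbors (the right-hand pattern in the figure above), while flatter attention diffuses it, matching the left-hand pattern.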

We believe that our contributions will further advance the state of the research in graph embedding in various directions. Our method for learning multiple node embeddings draws a connection between the rich and well-studied field of overlapping community detection, and the more recent one of graph embedding which we believe may result in fruitful future research. An open problem in this area is the use of multiple-embedding methods for classification. Furthermore, our contribution on learning hyperparameters will foster graph embedding adoption by reducing the need for expensive manual tuning. We hope the release of these papers and code will help the research community pursue these directions.

We thank Sami Abu-el-Haija who contributed to this work and is now a Ph.D. student at USC. For more information on the Graph Mining team (part of Algorithm and Optimization), visit our pages.

How to Get Your Superhero Game On?

Superheroes are in. From blockbuster movies that have been raking in millions for the last decade to new TV shows, there’s apparently a lot of crime fighting to be done, and we aren’t complaining about it.

What makes the idea of superheroes so compelling is that it breaks age barriers: people from all walks of life enjoy the battle between good and evil, knowing that there is always hope on our side that can and will make a difference in the end.

Whether you are a comic book fan, a superhero movie watcher, or just an enthusiast, here is how to take your superhero game to the next level.

Gone are the days when collectables such as action figures were considered child’s play. For one, look at the prices of these models, which in some cases run into hundreds of dollars. Then there is the visible detail that goes into producing the figures. Action figures nowadays are no less than sculptures, creating a whole new genre for artists that showcases not only love for a character or a movie but an appreciation for art as well. The choice, though, is yours in the end. You can get your superhero game on by having a few collectables featuring your favourites or, as some people do, go all in and create a display that will make the most ardent of fans jealous of your collection.

What’s better than getting a chance to be entertained by your favourite characters and simultaneously making money? Living the life of a fan isn’t cheap. Everyone needs money for movies, comics, action figures, and what not, so it only makes sense to channel this fascination with superheroes into having some fun with online casinos. Once you are past choosing the right mobile casino for yourself, you can play themed slots that feature characters such as Batman, Superman and even the Terminator. As a way to connect with the movies, an online game is ideal when you are standing in line for Comicon tickets or just want to relax and enjoy earning profits from the comfort of your home.

A more real-life way to indulge in superhero fandom, cosplay might seem juvenile but is a lot of fun. Dressing up as a cherished character is more than just about loving them, it is a way to demonstrate your creativity and also an excellent opportunity to meet with like-minded people. Cosplay events are a regular occurrence during many comic conventions, but if nothing is happening around you, simply dress up for a movie screening.

It’s one thing watching all the superhero movies that come out, and an entirely different experience when you read comics. For starters, comic books are the base material for most of what we see on screen, and then they also make for great collectables that allow you to revisit stories any time you want. The good thing about technology is that you can now access digital comics and carry them with you while travelling, but having a unique first edition in your hands, and flipping through the pages discovering the origins of a superhero, is a remarkable feeling like no other.

Review and Interview

We kick off this week with the latest Glass Magazine review and it’s a favorite of mine because it combines the “Top Glazier” issue with an awesome custom GlassBuild cover.  Good stuff right off the bat!  Because the focus is the “Top 50 Glaziers” this is a jam-packed edition with everything you could possibly want data and detail wise.  Also inside this issue- a tremendous article from Greg Oehlers along with a great piece on workforce development.  Great insights and should not be missed.  Meanwhile ad of the month was tough because this is a popular issue, there’s a lot more ads… but the winner is my friends from Bohle America.  Gareth Francey designed a piece that got me to stop and look.  That is always a big key for me ad wise.  Really easy on the eyes and interests me for more info.  Well done and congrats!

Before I get to this week’s interview- just a couple of quick notes…

Long time industry leader Ron Parker is leading a charge to defeat ALS.  Here is more info on how you can help!

Ride to Defeat ALS will be held on Saturday, July 20th at Mt. Angel, Oregon. If you would like to donate to support those living with ALS and their families, please click here!

Each and every donation will:

·      Fund the search for a treatment and a cure for ALS

·      Provide hands-on support to local families during their journey with ALS

·      Raise awareness for a disease that is NOT rare: an ALS diagnosis occurs every 90 minutes in the US

Your gift to this worthy cause is tax-deductible to the fullest extent allowed by law.

–  No blog post next week since it’s leading into the 4th of July holiday in the US. Hope everyone celebrating has a safe and enjoyable holiday.

Big 3 Interview

Monique Salas, National Healthcare Business Development Manager, SAGE Glass

This was a fun one for me, as Monique brought totally different skill sets to our industry (she was in pharmaceuticals) and she is a must follow/connect on LinkedIn. In addition, as those of you who read this blog consistently know, I am a huge cheerleader for dynamic glass, so the fact that Monique has an incredible understanding of and approach to it was a driving force behind these 3 questions…

You have extensive experience in the dynamic glass space.  There is great confidence that this space will continue to have significant growth.  Aside from the fact you sell it, why are you so bullish on these products?

I have a sincere desire to make spaces cleaner and more beautiful. Our living and healing spaces are very important for our mental and physical health. Natural light is a significant component that aids overall wellness. Starting in the late ’70s, researchers began to study the impact of natural light on patients. Overwhelmingly, patients exposed to natural light healed faster, required less medication and reported increased comfort. Smart glass now offers the missing element, and I find that incredibly exciting. A façade that changes without disruption of color or uniformity on the exterior, yet provides thermal comfort and greater satisfaction for occupants inside. It is a winning combo that meets the needs of the design community, building owners and, most importantly, patients.

Imagine, if you will, you walk into a hospital and in the lobby there are no blinds or curtains, yet the welcome staff is not interrupted by glare or heat. It sounds silly, but these are real solutions to increase productivity and thermal comfort.  Now, take it a step further and imagine you are a patient in a hospital room with little or no mobility. You want to see outside, but that depends on your Nurse coming in to adjust your blinds or curtains. This could be several minutes or even hours away, depending on how many patients they oversee.  In my opinion, this can be solved by designing spaces with smart glass intelligence.  I have had the unique ability to sell in both spaces, thermochromic & electrochromic: thermochromic is a passive technology that operates on radiant heat; electrochromic is an active technology that allows occupants to override with control (app or wall device).  I have come to respect each type and now believe that they should be used in collaboration.

Thermochromic belongs in common spaces, where control is not necessary (lobbies, hallways, and prescription pick-up), and electrochromic in patient rooms, giving patients the ability to use an app to control their own thermal comfort.  I hope leaders in both subcategories will start to work together on projects to meet the needs of the client.  To me, it is not one size fits all, but a true deep dive into the building, delivering evidence-based designs fusing thermochromic and electrochromic.

I’m a big fan of yours for a bunch of reasons, but maybe the biggest is that you have a sincere desire to constantly give back.  Where did this value come from, and why should we as a society be doing more of this?

First off, that is very kind; thank you, that means the world to me.  I would say that there are many contributors, ranging from experiencing the adversity of a mixed-race background to the lessons of gratitude & kindness instilled in me by my Grandfather, who passed away when I was 10.  I started off my career in the non-profit world and quite frankly wanted to “change the world.”  I don’t think it is uncommon for young college graduates to have these ideals. The reality is that the burden of education debt often dictates career paths.  Living in the Bay Area on a non-profit income is very difficult, if not impossible. As such, I made a conscious choice to exit and enter into a profit-generating space.  However, the agreement I made with myself was to not abandon my desire to impact the world positively.

We can all do something for someone. This includes the Earth we live on and all of the inhabitants that exist together.  Recently, I have made attempts to help save the monarch butterfly population with the simple act of dedicating space in my yard to the plants they enjoy. These are the types of activities that, if done by many of us, can reinvigorate an entire population of butterflies. It is born of the philosophy of acting locally while thinking globally.  I believe that many people have a sincere desire to do something but feel overwhelmed by the various choices of “volunteerism” and the commitment therein.   The truth is, we can all do small acts that help us feel like we are making a difference.  Because at the end of our lives, we are not going to be happy with how much money we made.  We are going to remember the lives we impacted & the differences we made.

You have been associated with the health care world for a great portion of your professional career, so I have to ask which professional is more challenging (can be both good and bad) to work with- the Doctor or the Architect?

Ha! This is a GREAT question and hilarious! ARCHITECTS, for sure.  In my time in the Pharmaceutical industry I had to work closely with Physicians to help meet the needs of their patient populations.  There, there was a clear connection between medication and outcomes. By that I mean, if your patient has an elevated A1C and I have the leading diabetes medication on the market, there are clear evidence-based connections for our dialogue and collaboration.  However, we are not quite there with the design community and smart glass. Even though the data on natural light exists and we have shifted to evidence-based design as a standard, resistance remains widespread, largely due to color.  I have heard from many Architects that they believe smart glass is just too dark.

The reality is that the rendering never includes blinds or curtains. Architects demo a beautiful picture that is not realistic.  In reality, blinds and/or curtains are typically down once occupants have inhabited the space, which means little or no access to natural light, resulting in a dark or artificially lit space.  I hope that more Architects will start to apply a larger lens when thinking of designing with smart glass in SD or DD.   Money can actually be saved by using smart glass earlier; the results are smaller HVAC systems, blind reduction or elimination, and spaces that can be reimagined to produce better outcomes.  I am hopeful that Architects will start to see smart glass the same way Physicians see medication… as a tool to provide wellness.


This spices up a boring government meeting.
LOOK OUT- This gin got recalled for having TOO MUCH alcohol in it!
They do say everyone has a twin… and in this case that was true and huge.

With Independence Day in the US coming up- I decided to go with one of my favorite Muppets (Sam the Eagle) to take the patriotic role in promoting this holiday in the video of the week!


Hot Toys 1/6th scale Spider-Man (Stealth Suit) 12-inch Collectible Figure (Deluxe Version)

It’s officially two weeks until Spider-Man: Far From Home arrives in cinemas! Peter Parker returns in this chapter for a summer trip to Europe with his best friends, hoping to leave super-heroics behind for a few weeks. While abroad, the young Spidey is recruited by Nick Fury to investigate a series of attacks creating havoc across the continent.

To prepare fans for Spider-Man’s latest adventure, Hot Toys is very excited to officially present the Deluxe Version 1/6th scale Spider-Man Collectible Figure, featuring the hero in his brand-new tactical Stealth Suit!

Sophisticatedly crafted with a striking likeness of Spider-Man as portrayed by Tom Holland in the movie, the figure features a newly developed masked head sculpt that exposes part of the face, with 3 pairs of interchangeable Spider-Man eye pieces to create numerous expressions, a newly tailored black stealth suit with intricate details, and a variety of spider-web shooting effect parts for dynamic poses.

Hot Toys is pleased to have the first diorama figure base beautifully designed by Studio HIVE for the Deluxe Version. Inspired by the battle scenes against Molten Man, this elaborate diorama figure base features 2 LED light-up modes and is absolutely a must-have for fans of Spider-Man.

Scroll down to see the rest of the pictures.
Click on them for bigger and better views.

Hot Toys MMS541 1/6th scale Spider-Man (Stealth Suit) Collectible Figure (Deluxe Version) specially features: Authentic and detailed likeness of Spider-Man in Spider-Man: Far From Home | newly developed masked head sculpt exposes only the eyes, with three (3) pairs of interchangeable Spider-Man eye pieces that can create numerous combinations of Spider-Man’s expressions | Approximately 28.5 cm tall | Body with 30 points of articulation | Ten (10) pieces of interchangeable hands with black cobweb pattern including: pair of fists, pair of relaxed hands, pair of palms for cobweb shooting, pair of palms for cobweb swinging, pair of open hands

Costume: newly developed black colored Spider-Man stealth suit, black-color belt, black-color boots embossed with pattern, Two (2) pairs of black-color web-shooters

Accessories: Six (6) strings of spider web in different shapes and lengths, attachable to the web-shooters | One (1) open spider web effect accessory | Molten Man diorama figure base features 2 LED lighting modes including general light effect and pulsing light effect (battery operated), specially designed by Studio HIVE***

*** Exclusive to Deluxe Version

Release date: Approximately Q3 – Q4, 2020

Off-Policy Classification – A New Reinforcement Learning Model Selection Method

Posted by Alex Irpan, Software Engineer, Robotics at Google

Reinforcement learning (RL) is a framework that lets agents learn decision making from experience. One of the many variants of RL is off-policy RL, where an agent is trained using a combination of data collected by other agents (off-policy data) and data it collects itself to learn generalizable skills like robotic walking and grasping. In contrast, fully off-policy RL is a variant in which an agent learns entirely from older data, which is appealing because it enables model iteration without requiring a physical robot. With fully off-policy RL, one can train several models on the same fixed dataset collected by previous agents, then select the best one. However, fully off-policy RL comes with a catch: while training can occur without a real robot, evaluation of the models cannot. Furthermore, ground-truth evaluation with a physical robot is too inefficient to test promising approaches that require evaluating a large number of models, such as automated architecture search with AutoML.

This challenge motivates off-policy evaluation (OPE), techniques for studying the quality of new agents using data from other agents. With rankings from OPE, we can selectively test only the most promising models on real-world robots, significantly scaling experimentation with the same fixed real robot budget.

A diagram for real-world model development. Assuming we can evaluate 10 models per day, without off-policy evaluation, we would need 100x as many days to evaluate our models.

Though the OPE framework shows promise, it assumes one has an off-policy evaluation method that accurately ranks performance from old data. However, agents that collected past experience may act very differently from newer learned agents, which makes it hard to get good estimates of performance.

In “Off-Policy Evaluation via Off-Policy Classification”, we propose a new off-policy evaluation method, called off-policy classification (OPC), that evaluates the performance of agents from past data by treating evaluation as a classification problem, in which actions are labeled as either potentially leading to success or guaranteed to result in failure. Our method works for image (camera) inputs, and doesn’t require reweighting data with importance sampling or using accurate models of the target environment, two approaches commonly used in prior work. We show that OPC scales to larger tasks, including a vision-based robotic grasping task in the real world.

How OPC Works
OPC relies on two assumptions: 1) that the final task has deterministic dynamics, i.e. no randomness is involved in how states change, and 2) that the agent either succeeds or fails at the end of each trial. This second “success or failure” assumption is natural for many tasks, such as picking up an object, solving a maze, winning a game, and so on. Because each trial will either succeed or fail in a deterministic way, we can assign binary classification labels to each action. We say an action is effective if it could lead to success, and is catastrophic if it is guaranteed to lead to failure.

OPC utilizes a Q-function, learned with a Q-learning algorithm, that estimates the future total reward if the agent chooses to take some action from its current state. The agent will then choose the action with the largest total reward estimate. In our paper, we prove that the performance of an agent is measured by how often its chosen action is an effective action, which depends on how well the Q-function correctly classifies actions as effective vs. catastrophic. This classification accuracy acts as an off-policy evaluation score.
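As a rough sketch of this idea (toy values and hypothetical names, not the paper's image-based Q-network), the agent acts greedily on the Q-function, and the same Q-function can be read as a binary effective-vs-catastrophic classifier by thresholding:

```python
import numpy as np

def greedy_action(q_function, state, actions):
    """The agent picks the action with the largest estimated
    future total reward under the learned Q-function."""
    q_values = [q_function(state, a) for a in actions]
    return actions[int(np.argmax(q_values))]

def classify_action(q_function, state, action, threshold):
    """Treat the Q-function as a binary classifier: Q-values above
    the threshold are predicted 'effective', below 'catastrophic'."""
    return q_function(state, action) >= threshold

# Toy Q-function over a single state with three discrete actions.
q = lambda state, action: {0: 0.2, 1: 0.9, 2: 0.4}[action]
chosen = greedy_action(q, state=None, actions=[0, 1, 2])
```

How often `classify_action` agrees with the true (unknown) effective/catastrophic labels is exactly the classification accuracy the paragraph above uses as an evaluation score.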

However, the labeling of data from previous trials is only partial. For example, if a previous trial was a failure, we do not get negative labels because we do not know which action was the catastrophic one. To overcome this, we leverage techniques from semi-supervised learning, positive-unlabeled learning in particular, to get an estimate of classification accuracy from partially labeled data. This accuracy is the OPC score.
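A minimal sketch of the resulting score, under the simplifying assumption that a thresholded Q-value is the classifier (the paper's positive-unlabeled estimator handles class priors more carefully): actions from successful trials are known positives, while actions from failed trials stay unlabeled, since the catastrophic action among them is unknown.

```python
import numpy as np

def opc_style_score(q_pos, q_unlabeled, threshold):
    """Simplified OPC-style score from partially labeled data.

    q_pos:       Q-values of actions taken in successful trials
                 (known to be effective).
    q_unlabeled: Q-values of actions from failed trials (unlabeled,
                 since we don't know which action was catastrophic).

    A well-calibrated Q-function places known-effective actions
    above the threshold and most unlabeled ones below it.
    """
    tpr = float(np.mean(np.asarray(q_pos) > threshold))       # true-positive rate
    fpr = float(np.mean(np.asarray(q_unlabeled) > threshold)) # proxy false-positive rate
    return tpr - fpr

# A Q-function that separates the two groups earns a higher score.
good = opc_style_score([0.8, 0.9, 0.7], [0.1, 0.3, 0.6], threshold=0.5)
poor = opc_style_score([0.4, 0.3], [0.7, 0.8], threshold=0.5)
```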

Off-Policy Evaluation for Sim-to-Real Learning
In robotics, it’s common to use simulated data and transfer learning techniques to reduce the sample complexity of learning robotics skills. This can be very useful, but tuning these sim-to-real techniques for real-world robotics is challenging. Much like fully off-policy RL, training doesn’t use the real robot, because the policy is trained in simulation, but evaluating that policy still requires a real robot. Here, off-policy evaluation can come to the rescue again: we can take a policy trained only in simulation, then evaluate it using previous real-world data to measure its transfer to the real robot. We examine OPC across both fully off-policy RL and sim-to-real RL.

An example of how simulated experience can differ from real-world experience. Here, simulated images (left) have much less visual complexity than real-world images (right).

First, we set up a simulated version of our robot grasping task, where we could easily train and evaluate several models to benchmark off-policy evaluation. These models were trained with fully off-policy RL, then evaluated with off-policy evaluation. We found that in our robotics tasks, a variant of the OPC called the SoftOPC performed best at predicting final success rate.
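A sketch of the soft variant, on my reading that SoftOPC replaces the hard threshold with average Q-values (names here are illustrative; see the paper for the exact estimator): the score is the gap between the mean Q-value on known-effective actions and the mean over all logged actions.

```python
import numpy as np

def soft_opc_style_score(q_pos, q_all):
    """Soft variant: no hard threshold. Compare the average Q-value
    on known-effective actions against the average over the whole
    dataset; a larger gap means the Q-function separates effective
    from catastrophic actions more cleanly."""
    return float(np.mean(q_pos) - np.mean(q_all))

# Effective actions score 0.8-0.9, the rest 0.1-0.2: a clear gap.
score = soft_opc_style_score(q_pos=[0.9, 0.8], q_all=[0.9, 0.8, 0.2, 0.1])
```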

An experiment in the simulated grasping task. The red curve is the dimensionless SoftOPC score over the course of training, evaluated from old data. The blue curve is the grasp success rate in simulation. We see that the SoftOPC on old data correlates well with the grasp success of the model within our simulator.

After success in sim, we then tried SoftOPC in the real-world task. We took 15 models, trained to have varying degrees of robustness to the gap between simulation and reality. Of these models, 7 of them were trained purely in simulation, and the rest were trained on mixes of simulated and real-world data. For each model, we evaluated the SoftOPC on off-policy real-world data, then the real-world grasp success, to see how well SoftOPC predicted performance of that model. We found that on real data, the SoftOPC does produce scores that correlate with true grasp success, letting us rank sim-to-real techniques using past real experience.

SoftOPC score and true performance for 3 different sim-to-real methods: a baseline simulation, a simulation with random textures and lighting, and a model trained with RCAN. All three models are trained with no real data, then evaluated with off-policy evaluation on a validation set of real data. The ordering of the SoftOPC score matches the order of real grasp success.

Below is a scatterplot of the full results from all 15 models. Each point represents the off-policy evaluation score and real-world grasp success of each model. We compare different scoring functions by their correlation to final grasp success. The SoftOPC does not correlate perfectly with true grasp success, but its scores are significantly more reliable than baseline approaches like the temporal-difference error (the standard Q-learning loss).
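The comparison behind the scatterplot reduces to a correlation between each scoring function and measured grasp success; a sketch with made-up numbers (Pearson correlation via NumPy; the actual per-model values are not published in this post):

```python
import numpy as np

# Hypothetical values for illustration only: one entry per model,
# pairing an off-policy score with real-world grasp success.
grasp_success = np.array([0.20, 0.35, 0.50, 0.62, 0.80])
soft_opc      = np.array([0.10, 0.30, 0.45, 0.55, 0.90])  # tracks success
td_error      = np.array([0.40, 0.20, 0.70, 0.10, 0.50])  # noisy baseline

def pearson(score, truth):
    """Pearson correlation between an OPE score and true performance."""
    return float(np.corrcoef(score, truth)[0, 1])

# A reliable OPE method correlates strongly with real performance.
r_soft = pearson(soft_opc, grasp_success)
r_td = pearson(td_error, grasp_success)
```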

Results from our sim-to-real evaluation experiment. On the left is a baseline, the temporal difference error of the model. On the right is one of our proposed methods, the SoftOPC. The shaded region is a 95% confidence interval. The correlation is significantly better with SoftOPC.

Future Work
One promising direction for future work is to see if we can relax our assumptions about the task, to support tasks where dynamics are more noisy, or where we get partial credit for almost succeeding. However, even with our included assumptions, we think the results are promising enough to be applied to many real-world RL problems.

This research was conducted by Alex Irpan, Kanishka Rao, Konstantinos Bousmalis, Chris Harris, Julian Ibarz and Sergey Levine. We’d like to thank Razvan Pascanu, Dale Schuurmans, George Tucker and Paul Wohlhart for valuable discussions. A preprint is available on arXiv.


Google at CVPR 2019

Posted by Andrew Helton, Editor, Google AI Communications

This week, Long Beach, CA hosts the 2019 Conference on Computer Vision and Pattern Recognition (CVPR 2019), the premier annual computer vision event comprising the main conference and several co-located workshops and tutorials. As a leader in computer vision research and a Platinum Sponsor, Google will have a strong presence at CVPR 2019—over 250 Googlers will be in attendance to present papers and invited talks at the conference, and to organize and participate in multiple workshops.

If you are attending CVPR this year, please stop by our booth and chat with our researchers who are actively pursuing the next generation of intelligent systems that utilize the latest machine learning techniques applied to various areas of machine perception. Our researchers will also be available to talk about and demo several recent efforts, including the technology behind predicting pedestrian motion, the Open Images V5 dataset and much more.

You can learn more about our research being presented at CVPR 2019 in the list below (Google affiliations highlighted in blue).

Area Chairs include:
Jonathan T. Barron, William T. Freeman, Ce Liu, Michael Ryoo, Noah Snavely

Oral Presentations
Relational Action Forecasting
Chen Sun, Abhinav Shrivastava, Carl Vondrick, Rahul Sukthankar, Kevin Murphy, Cordelia Schmid

Pushing the Boundaries of View Extrapolation With Multiplane Images
Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, Noah Snavely

Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation
Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille, Li Fei-Fei

AutoAugment: Learning Augmentation Strategies From Data
Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, Quoc V. Le

DeepView: View Synthesis With Learned Gradient Descent
John Flynn, Michael Broxton, Paul Debevec, Matthew DuVall, Graham Fyffe, Ryan Overbeck, Noah Snavely, Richard Tucker

Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation
He Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, Leonidas J. Guibas

Do Better ImageNet Models Transfer Better?
Simon Kornblith, Jonathon Shlens, Quoc V. Le

TextureNet: Consistent Local Parametrizations for Learning From High-Resolution Signals on Meshes
Jingwei Huang, Haotian Zhang, Li Yi, Thomas Funkhouser, Matthias Niessner, Leonidas J. Guibas

Diverse Generation for Multi-Agent Sports Games
Raymond A. Yeh, Alexander G. Schwing, Jonathan Huang, Kevin Murphy

Occupancy Networks: Learning 3D Reconstruction in Function Space
Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, Andreas Geiger

A General and Adaptive Robust Loss Function
Jonathan T. Barron

Learning the Depths of Moving People by Watching Frozen People
Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu, William T. Freeman
(CVPR 2019 Best Paper Honorable Mention)

Composing Text and Image for Image Retrieval – an Empirical Odyssey
Nam Vo, Lu Jiang, Chen Sun, Kevin Murphy, Li-Jia Li, Li Fei-Fei, James Hays

Learning to Synthesize Motion Blur
Tim Brooks, Jonathan T. Barron

Neural Rerendering in the Wild
Moustafa Meshry, Dan B. Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, Ricardo Martin-Brualla

Neural Illumination: Lighting Prediction for Indoor Environments
Shuran Song, Thomas Funkhouser

Unprocessing Images for Learned Raw Denoising
Tim Brooks, Ben Mildenhall, Tianfan Xue, Jiawen Chen, Dillon Sharlet, Jonathan T. Barron

Co-Occurrent Features in Semantic Segmentation
Hang Zhang, Han Zhang, Chenguang Wang, Junyuan Xie

CrDoCo: Pixel-Level Domain Transfer With Cross-Domain Consistency
Yun-Chun Chen, Yen-Yu Lin, Ming-Hsuan Yang, Jia-Bin Huang

Im2Pencil: Controllable Pencil Illustration From Photographs
Yijun Li, Chen Fang, Aaron Hertzmann, Eli Shechtman, Ming-Hsuan Yang

Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis
Qi Mao, Hsin-Ying Lee, Hung-Yu Tseng, Siwei Ma, Ming-Hsuan Yang

Revisiting Self-Supervised Visual Representation Learning
Alexander Kolesnikov, Xiaohua Zhai, Lucas Beyer

Scene Graph Generation With External Knowledge and Image Reconstruction
Jiuxiang Gu, Handong Zhao, Zhe Lin, Sheng Li, Jianfei Cai, Mingyang Ling

Scene Memory Transformer for Embodied Agents in Long-Horizon Tasks
Kuan Fang, Alexander Toshev, Li Fei-Fei, Silvio Savarese

Spatially Variant Linear Representation Models for Joint Filtering
Jinshan Pan, Jiangxin Dong, Jimmy S. Ren, Liang Lin, Jinhui Tang, Ming-Hsuan Yang

Target-Aware Deep Tracking
Xin Li, Chao Ma, Baoyuan Wu, Zhenyu He, Ming-Hsuan Yang

Temporal Cycle-Consistency Learning
Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, Andrew Zisserman

Depth-Aware Video Frame Interpolation
Wenbo Bao, Wei-Sheng Lai, Chao Ma, Xiaoyun Zhang, Zhiyong Gao, Ming-Hsuan Yang

MnasNet: Platform-Aware Neural Architecture Search for Mobile
Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, Quoc V. Le

A Compact Embedding for Facial Expression Similarity
Raviteja Vemulapalli, Aseem Agarwala

Contrastive Adaptation Network for Unsupervised Domain Adaptation
Guoliang Kang, Lu Jiang, Yi Yang, Alexander G. Hauptmann

DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality
Chloe LeGendre, Wan-Chun Ma, Graham Fyffe, John Flynn, Laurent Charbonnel, Jay Busch, Paul Debevec

Detect-To-Retrieve: Efficient Regional Aggregation for Image Search
Marvin Teichmann, Andre Araujo, Menglong Zhu, Jack Sim

Fast Object Class Labelling via Speech
Michael Gygli, Vittorio Ferrari

Learning Independent Object Motion From Unlabelled Stereoscopic Videos
Zhe Cao, Abhishek Kar, Christian Hane, Jitendra Malik

Peeking Into the Future: Predicting Future Person Activities and Locations in Videos
Junwei Liang, Lu Jiang, Juan Carlos Niebles, Alexander G. Hauptmann, Li Fei-Fei

SpotTune: Transfer Learning Through Adaptive Fine-Tuning
Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, Rogerio Feris

NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection
Golnaz Ghiasi, Tsung-Yi Lin, Quoc V. Le

Class-Balanced Loss Based on Effective Number of Samples
Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, Serge Belongie

FEELVOS: Fast End-To-End Embedding Learning for Video Object Segmentation
Paul Voigtlaender, Yuning Chai, Florian Schroff, Hartwig Adam, Bastian Leibe, Liang-Chieh Chen

Inserting Videos Into Videos
Donghoon Lee, Tomas Pfister, Ming-Hsuan Yang

Volumetric Capture of Humans With a Single RGBD Camera via Semi-Parametric Learning
Rohit Pandey, Anastasia Tkach, Shuoran Yang, Pavel Pidlypenskyi, Jonathan Taylor, Ricardo Martin-Brualla, Andrea Tagliasacchi, George Papandreou, Philip Davidson, Cem Keskin, Shahram Izadi, Sean Fanello

You Look Twice: GaterNet for Dynamic Filter Selection in CNNs
Zhourong Chen, Yang Li, Samy Bengio, Si Si

Interactive Full Image Segmentation by Considering All Regions Jointly
Eirikur Agustsson, Jasper R. R. Uijlings, Vittorio Ferrari

Large-Scale Interactive Object Segmentation With Human Annotators
Rodrigo Benenson, Stefan Popov, Vittorio Ferrari

Self-Supervised GANs via Auxiliary Rotation Loss
Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lučić, Neil Houlsby

Sim-To-Real via Sim-To-Sim: Data-Efficient Robotic Grasping via Randomized-To-Canonical Adaptation Networks
Stephen James, Paul Wohlhart, Mrinal Kalakrishnan, Dmitry Kalashnikov, Alex Irpan, Julian Ibarz, Sergey Levine, Raia Hadsell, Konstantinos Bousmalis

Using Unknown Occluders to Recover Hidden Scenes
Adam B. Yedidia, Manel Baradad, Christos Thrampoulidis, William T. Freeman, Gregory W. Wornell

Computer Vision for Global Challenges
Organizers include: Timnit Gebru, Ernest Mwebaze, John Quinn

Deep Vision 2019
Invited speakers include: Pierre Sermanet, Chris Bregler

Landmark Recognition
Organizers include: Andre Araujo, Bingyi Cao, Jack Sim, Tobias Weyand

Image Matching: Local Features and Beyond
Organizers include: Eduard Trulls

3D-WiDGET: Deep GEneraTive Models for 3D Understanding
Invited speakers include: Julien Valentin

Fine-Grained Visual Categorization
Organizers include: Christine Kaeser-Chen
Advisory panel includes: Hartwig Adam

Low-Power Image Recognition Challenge (LPIRC)
Organizers include: Aakanksha Chowdhery, Achille Brighton, Alec Go, Andrew Howard, Bo Chen, Jaeyoun Kim, Jeff Gilbert

New Trends in Image Restoration and Enhancement Workshop and Associated Challenges
Program chairs include: Vivek Kwatra, Peyman Milanfar, Sebastian Nowozin, George Toderici, Ming-Hsuan Yang

Spatio-temporal Action Recognition (AVA) @ ActivityNet Challenge
Organizers include: David Ross, Sourish Chaudhuri, Radhika Marvin, Arkadiusz Stopczynski, Joseph Roth, Caroline Pantofaru, Chen Sun, Cordelia Schmid

Third Workshop on Computer Vision for AR/VR
Organizers include: Sofien Bouaziz, Serge Belongie

DAVIS Challenge on Video Object Segmentation
Organizers include: Jordi Pont-Tuset, Alberto Montes

Efficient Deep Learning for Computer Vision
Invited speakers include: Andrew Howard

Fairness Accountability Transparency and Ethics in Computer Vision
Organizers include: Timnit Gebru, Margaret Mitchell

Precognition Seeing through the Future
Organizers include: Utsav Prabhu

Workshop and Challenge on Learned Image Compression
Organizers include: George Toderici, Michele Covell, Johannes Ballé, Eirikur Agustsson, Nick Johnston

When Blockchain Meets Computer Vision & AI
Invited speakers include: Chris Bregler

Applications of Computer Vision and Pattern Recognition to Media Forensics
Organizers include: Paul Natsev, Christoph Bregler

Towards Relightable Volumetric Performance Capture of Humans
Organizers include: Sean Fanello, Christoph Rhemann, Graham Fyffe, Jonathan Taylor, Sofien Bouaziz, Paul Debevec, Shahram Izadi

Learning Representations via Graph-structured Networks
Organizers include: Ming-Hsuan Yang
