Waiting To See What Happens

It’s the end of the month and I truly expect some big acquisition news to break… but then again I have felt that way for a while, so we’ll see if my senses are accurate or not.  Sometimes deals are there, looking perfect on the surface, yet they fall apart.  I heard last summer about a pretty major deal; things were moving fast, the buyer had a team working “around the clock” on it, and so on.  In the end, though, for a variety of reasons nothing happened- no deal.  But this time I think we’ll see some action- whether it’s this week or sometime in the next month, as I see at least 3 deals close to the finish line.  I will say that researching deals like this was a ton easier in 2007-2008 when I broke a few of them on here… those were simpler times for this sort of thing for sure!


–  The past AAMA event looked like it had some excellent content.  One of the recaps I read included the discussion of LCAs and EPDs.  That is an area our industry still doesn’t have a great grasp of- the energy committee at GANA has done an admirable job of pushing its importance.  Efforts like those (from people like Mark Silverberg and Helen Sanders), along with coverage at an event like AAMA’s, surely help.  In my opinion, the cost and time needed to produce this information is surely a scary proposition for many at this point, but it sure looks like demand for these assessments is not going away.

–  For my friends in Southern California… any insight on “Measure S?”  According to this article it will hurt commercial building and development.  But I am curious what the folks on the ground and doing business every day out there think.  And of course, with any ballot measure there are usually, as my brother Steve would say, “3 sides to the story,” with each side taking a point and the truth lying somewhere in the middle.
–  If you ever watched the excellent documentary “The Two Escobars” or, more recently, the Netflix series “Narcos,” then you are familiar with Pablo Escobar.  But are you aware that his son is actually a very respected architect?  He credits the profession with saving his life.  Good interview with him here.
–  The latest Architecture Billings Index is out, and it starts 2017 in negative territory with a score of 49.5 (slightly below the break-even mark of 50).  However, the new-projects score was a smoking 60, up from 57.6 last month.  Overall the positive vibes continue, and the analysts who monitor this still feel pretty good.  If there is a worry, it’s that “real time” conditions are a bit soft right now, but that is the adventure of the start of the year, when weather, budgets, and holiday hangovers wreak havoc with schedules.

–  Last this week- glazier certification is back in the discussion.  AMS- the group the industry uses for IGCC, SGCC, and NACC certification, among others- held a summit in Las Vegas with the Finishing Trades Institute to begin dialogue on individual glazier certification.  There’s a lot of passion for this process from many different corners of our world, and it is exciting to finally see some movement.  But there’s no question this is in its infancy and there is a long road to go.


I’ve had to use this foam before- crazy to get it confused with hair mousse.  Then again I haven’t had hair for a long, long time….
This made me think of BEC’s Keynote Speaker Mark Eaton.  Tall guy with a creative approach.
How to sell 26,000 boxes of Girl Scout Cookies?  Networking and honest reviews. (And some people with lots of cash obviously….)

Drones are all the rage these days but there’s a group of tigers that surely do not like them!


Google Research Awards 2016

Posted by Maggie Johnson, Director of Education and University Relations, Google

We’ve just completed another round of the Google Research Awards, our annual open call for proposals on computer science and related topics including machine learning, machine perception, natural language processing, and security. Our grants cover tuition for a graduate student and provide both faculty and students the opportunity to work directly with Google researchers and engineers.

This round we received 876 proposals covering 44 countries and over 300 universities. After expert reviews and committee discussions, we decided to fund 143 projects. Here are a few observations from this round:

Congratulations to the well-deserving recipients of this round’s awards. If you are interested in applying for the next round (deadline is September 30th), please visit our website for more information.


Preprocessing for Machine Learning with tf.Transform

Posted by Kester Tong, David Soergel, and Gus Katsiapis, Software Engineers

When applying machine learning to real world datasets, a lot of effort is required to preprocess data into a format suitable for standard machine learning models, such as neural networks. This preprocessing takes a variety of forms, from converting between formats, to tokenizing and stemming text and forming vocabularies, to performing a variety of numerical operations such as normalization.

Today we are announcing tf.Transform, a library for TensorFlow that allows users to define preprocessing pipelines and run these using large-scale data processing frameworks, while also exporting the pipeline in a way that can be run as part of a TensorFlow graph. Users define a pipeline by composing modular Python functions, which tf.Transform then executes with Apache Beam, a framework for large-scale, efficient, distributed data processing. Apache Beam pipelines can be run on Google Cloud Dataflow, with planned support for other frameworks. The TensorFlow graph exported by tf.Transform enables the preprocessing steps to be replicated when the trained model is used to make predictions, such as when serving the model with TensorFlow Serving.
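The post itself doesn’t include a snippet, but a minimal sketch of such a pipeline might look like the following, using analyzer names from the early releases (tft.mean, tft.scale_to_0_1, tft.string_to_int; later versions renamed some of these) and hypothetical feature names:

```python
import tensorflow_transform as tft


def preprocessing_fn(inputs):
    """Defines the preprocessing over a dict of raw feature tensors.

    The feature names ('age', 'income', 'occupation') are hypothetical.
    """
    # Analyzers such as tft.mean need a full pass over the dataset;
    # tf.Transform runs that pass with Apache Beam and bakes the result
    # into the exported TensorFlow graph as a constant.
    age_centered = inputs['age'] - tft.mean(inputs['age'])

    # Scale a numeric feature to [0, 1] using the dataset-wide min and max.
    income_scaled = tft.scale_to_0_1(inputs['income'])

    # Build a vocabulary over the whole dataset and map strings to integer ids.
    occupation_ids = tft.string_to_int(inputs['occupation'])

    return {
        'age_centered': age_centered,
        'income_scaled': income_scaled,
        'occupation_ids': occupation_ids,
    }
```

Because each analyzer result is embedded in the exported graph as a constant, the exact same transformations can later be replayed at serving time without re-running Beam.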

A common problem encountered when running machine learning models in production is “training-serving skew”, where the data seen at serving time differs in some way from the data used to train the model, leading to reduced prediction quality. tf.Transform ensures that no skew can arise during preprocessing, by guaranteeing that the serving-time transformations are exactly the same as those performed at training time, in contrast to when training-time and serving-time preprocessing are implemented separately in two different environments (e.g., Apache Beam and TensorFlow, respectively).

In addition to facilitating preprocessing, tf.Transform allows users to compute summary statistics for their datasets. Understanding the data is very important in every machine learning project, as subtle errors can arise from making wrong assumptions about what the underlying data look like. By making the computation of summary statistics easy and efficient, tf.Transform allows users to check their assumptions about both raw and preprocessed data.

tf.Transform allows users to define a preprocessing pipeline. Users can materialize the preprocessed data for use in TensorFlow training, and also export a tf.Transform graph that encodes the transformations as a TensorFlow graph. This transformation graph can then be incorporated into the model graph used for inference.
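As a rough sketch of how those two outputs are produced (module paths and helpers follow the early tensorflow_transform.beam API and vary across versions; the data and file paths are made up):

```python
import apache_beam as beam
import tensorflow as tf
import tensorflow_transform.beam as tft_beam
from tensorflow_transform.tf_metadata import dataset_metadata, dataset_schema

# Schema for the hypothetical raw features used by preprocessing_fn,
# the function sketched earlier in this post.
raw_metadata = dataset_metadata.DatasetMetadata(
    dataset_schema.from_feature_spec({
        'age': tf.FixedLenFeature([], tf.float32),
        'income': tf.FixedLenFeature([], tf.float32),
        'occupation': tf.FixedLenFeature([], tf.string),
    }))

with beam.Pipeline() as pipeline:
    with tft_beam.Context(temp_dir='/tmp/tft'):
        raw_data = pipeline | beam.Create([
            {'age': 29.0, 'income': 52000.0, 'occupation': 'glazier'},
            {'age': 41.0, 'income': 61000.0, 'occupation': 'engineer'},
        ])
        # A single Beam transform analyzes the full dataset and applies the
        # transformations, returning materialized examples plus a
        # transform_fn: the preprocessing steps encoded as a TensorFlow graph.
        (transformed_data, transformed_metadata), transform_fn = (
            (raw_data, raw_metadata)
            | tft_beam.AnalyzeAndTransformDataset(preprocessing_fn))
        # Persisting the transform graph lets it be attached to the model
        # graph used for inference, e.g. under TensorFlow Serving.
        transform_fn | tft_beam.WriteTransformFn('/tmp/tft/transform_fn')
```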

We’re excited to be releasing this latest addition to the TensorFlow ecosystem, and we hope users will find it useful for preprocessing and understanding their data.

We wish to thank the following members of the tf.Transform team for their contributions to this project: Clemens Mewald, Robert Bradshaw, Rajiv Bharadwaja, Elmer Garduno, Afshin Rostamizadeh, Neoklis Polyzotis, Abhi Rao, Joe Toth, Neda Mirian, Dinesh Kulkarni, Robbie Haertel, Cyril Bortolato and Slaven Bilac. We also wish to thank the TensorFlow, TensorFlow Serving and Cloud Dataflow teams for their support.


Headset “Removal” for Virtual and Mixed Reality

Posted by Vivek Kwatra, Research Scientist and Christian Frueh, Avneesh Sud, Software Engineers

Virtual Reality (VR) enables remarkably immersive experiences, offering new ways to view the world and the ability to explore novel environments, both real and imaginary. However, compared to physical reality, sharing these experiences with others can be difficult, as VR headsets make it challenging to create a complete picture of the people participating in the experience.

Some of this disconnect is alleviated by Mixed Reality (MR), a related medium that shares the virtual context of a VR user in a two-dimensional video format, allowing other viewers to get a feel for the user’s virtual experience. Even though MR facilitates sharing, the headset continues to block facial expressions and eye gaze, presenting a significant hurdle to a fully engaging experience and a complete view of the person in VR.

Google Machine Perception researchers, in collaboration with Daydream Labs and YouTube Spaces, have been working on solutions to address this problem, in which we reveal the user’s face by virtually “removing” the headset, creating a realistic see-through effect.

A VR user captured in front of a green screen is blended with the virtual environment to generate the MR output: traditional MR output has the user’s face occluded, while our result reveals the face. Note how the headset is modified with a marker to aid tracking.

Our approach uses a combination of 3D vision, machine learning and graphics techniques, and is best explained in the context of enhancing Mixed Reality video (also discussed in the Google-VR blog). It consists of three main components:

Dynamic face model capture
The core idea behind our technique is to use a 3D model of the user’s face as a proxy for the hidden face. This proxy is used to synthesize the face in the MR video, thereby creating an impression of the headset being removed. First, we capture a personalized 3D face model for the user with what we call gaze-dependent dynamic appearance. This initial calibration step requires the user to sit in front of a color+depth camera and a monitor, and then track a marker on the monitor with their eyes. We use this one-time calibration procedure, which typically takes less than a minute, to acquire a 3D face model of the user and learn a database that maps appearance images (or textures) to different eye-gaze directions and blinks. This gaze database (i.e., the face model with textures indexed by eye-gaze) allows us to dynamically change the appearance of the face during synthesis and generate any desired eye-gaze, thus making the synthesized face look natural and alive.

On the left, the user’s face is captured by a camera as she tracks a marker on the monitor with her eyes. On the right, we show the dynamic nature of reconstructed 3D face model: by moving or clicking on the mouse, we are able to simulate both apparent eye gaze and blinking.
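No code ships with the post, but as a mental model of what this calibration produces (every name and number below is a hypothetical simplification), the gaze database is essentially a collection of gaze-direction/texture pairs:

```python
import numpy as np

gaze_db = []  # list of (unit gaze vector, face texture image) pairs


def gaze_direction(marker_xy, monitor_distance_mm=600.0):
    """Converts the tracked 2D marker position on the monitor (in mm,
    relative to the midpoint of the user's eyes) into a unit 3D gaze
    direction; the monitor distance is a hypothetical constant."""
    v = np.array([marker_xy[0], marker_xy[1], monitor_distance_mm])
    return v / np.linalg.norm(v)


def record_calibration_frame(marker_xy, face_texture):
    """Pairs the gaze implied by the marker with the face appearance
    captured by the color+depth camera at that instant."""
    gaze_db.append((gaze_direction(marker_xy), face_texture))
```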

Calibration and Alignment
Creating a Mixed Reality video requires a specialized setup consisting of an external camera, calibrated and time-synced with the headset. The camera captures a video stream of the VR user in front of a green screen and then composites a cutout of the user with the virtual world to create the final MR video. An important step here is to accurately estimate the calibration (the fixed 3D transformation) between the camera and headset coordinate systems. These calibration techniques typically involve significant manual intervention and are done in multiple steps. We simplify the process by adding a physical marker to the front of the headset and tracking it visually in 3D, which allows us to optimize for the calibration parameters automatically from the VR session.
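The post doesn’t spell out the optimization, but one standard building block for recovering a fixed rigid transform from tracked 3D marker positions, assuming corresponding point pairs in the camera and headset-tracking coordinate systems can be collected during the session, is the Kabsch algorithm; a sketch:

```python
import numpy as np


def rigid_transform(P, Q):
    """Kabsch algorithm: finds rotation R and translation t minimizing
    sum ||R @ p_i + t - q_i||^2 over corresponding 3D points.

    P, Q: (N, 3) arrays of matching points, e.g. marker positions seen by
    the external camera (P) and the same positions predicted from the
    headset's tracked pose (Q).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```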

For headset “removal”, we need to align the 3D face model with the visible portion of the face in the camera stream, so that they would blend seamlessly with each other. A reasonable proxy to this alignment is to place the face model just behind the headset. The calibration described above, coupled with VR headset tracking, provides sufficient information to determine this placement, allowing us to modify the camera stream by rendering the virtual face into it.

Compositing and Rendering
Having tackled the alignment, the last step involves producing a suitable rendering of the 3D face model, consistent with the content in the camera stream. We are able to reproduce the true eye-gaze of the user by combining our dynamic gaze database with an HTC Vive headset that has been modified by SMI to incorporate eye-tracking technology. Images from these eye trackers lack sufficient detail to directly reproduce the occluded face region, but are well suited to provide fine-grained gaze information. Using the live gaze data from the tracker, we synthesize a face proxy that accurately represents the user’s attention and blinks. At run-time, the gaze database, captured in the preprocessing step, is searched for the most appropriate face image corresponding to the query gaze state, while also respecting aesthetic considerations such as temporal smoothness. Additionally, to account for lighting changes between gaze database acquisition and run-time, we apply color correction and feathering, such that the synthesized face region matches with the rest of the face.
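As a sketch of that run-time search (the cost function and smoothness weight below are hypothetical simplifications of whatever the team actually optimizes):

```python
import numpy as np


def query_gaze_db(gaze_db, query_gaze, prev_idx=None, smoothness=0.3):
    """Picks the stored face texture whose gaze best matches the live
    eye-tracker reading, while discouraging jumps between frames.

    gaze_db: list of (unit gaze vector, texture) pairs from calibration.
    query_gaze: unit gaze vector reported by the headset's eye tracker.
    prev_idx: index chosen on the previous frame, for temporal smoothness.
    """
    assert gaze_db, "calibration database must be non-empty"
    best_idx, best_cost = 0, float('inf')
    for i, (g, _) in enumerate(gaze_db):
        cost = 1.0 - float(np.dot(g, query_gaze))  # gaze mismatch in [0, 2]
        if prev_idx is not None:
            # Penalize candidates whose gaze differs strongly from the one
            # used on the previous frame, for temporal stability.
            cost += smoothness * (1.0 - float(np.dot(g, gaze_db[prev_idx][0])))
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx, gaze_db[best_idx][1]
```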

Humans are highly sensitive to artifacts on faces, and even small imperfections in the synthesis of the occluded face can feel unnatural and distracting, a phenomenon known as the “uncanny valley.” To mitigate this problem, we do not remove the headset completely; instead, we have chosen a user experience that conveys a ‘scuba mask effect’ by compositing the color-corrected face proxy with a translucent headset. Reminding the viewer of the presence of the headset helps us avoid the uncanny valley, and also makes our algorithm robust to small errors in alignment and color correction.
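A compositing step along these lines (a simplified sketch; the renders, masks, and the 0.35 opacity are all hypothetical) might look like:

```python
import numpy as np


def scuba_mask_composite(frame, face_render, headset_render,
                         face_mask, headset_mask, headset_alpha=0.35):
    """Blends the synthesized face into the camera frame, then ghosts a
    translucent headset back over it for the 'scuba mask' effect.

    frame, face_render, headset_render: float RGB images in [0, 1].
    face_mask, headset_mask: single-channel masks in [0, 1].
    """
    m = face_mask[..., None]
    out = frame * (1.0 - m) + face_render * m        # reveal the face
    h = (headset_mask * headset_alpha)[..., None]
    out = out * (1.0 - h) + headset_render * h       # translucent headset
    return out
```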

This modified camera stream, displaying a see-through headset, with the user’s face revealed and their true eye-gaze recreated, is subsequently merged with the virtual environment to create the final MR video.

Results and Extensions
We have used our headset removal technology to enhance Mixed Reality, allowing the medium to not only convey a VR user’s interaction with the virtual environment but also show their face in a natural and convincing fashion. The example below demonstrates our tech applied to an artist using Google Tilt Brush in a virtual environment:

An artist creates 3D art using Google Tilt Brush, shown in Mixed Reality. On the top is the traditional MR result where the face is hidden behind the headset. On the bottom is our result, which reveals the entire face and eyes for a more natural and engaging experience.

While we have shown the potential of our technology, its applications extend beyond Mixed Reality. Headset removal is poised to enhance communication and social interaction in VR itself with diverse applications like VR video conference meetings, multiplayer VR gaming, and exploration with friends and family. Going from an utterly blank headset to being able to see, with photographic realism, the faces of fellow VR users promises to be a significant transition in the VR world, and we are excited to be a part of it.


Dam Toys History Series 1/6th scale Vietnam War U.S. Marine (Tet Offensive, 1968) 12" figure

The Tet Offensive was one of the largest military campaigns of the Vietnam War, launched on January 30, 1968, by forces of the Viet Cong and North Vietnamese People’s Army of Vietnam against the forces of the South Vietnamese Army of the Republic of Vietnam, the United States Armed Forces, and their allies. It was a campaign of surprise attacks against military and civilian command and control centers throughout South Vietnam. The name of the offensive comes from the Tết holiday, the Vietnamese New Year, when the first major attacks took place.

Though the initial attacks stunned both the US and South Vietnamese armies, causing them to temporarily lose control of several cities, they quickly regrouped, beat back the attacks, and inflicted heavy casualties on North Vietnamese forces. In the 1987 British-American war film “Full Metal Jacket,” directed and produced by Stanley Kubrick, the storyline follows a platoon of U.S. Marines through their training and the experiences of two of the platoon’s Marines in the Tet Offensive. Adam Baldwin stars as “Animal Mother,” an M60 machine gunner. The M60 served in the Vietnam War as a squad automatic weapon with many U.S. units. Every soldier in the rifle squad would carry an additional 200 linked rounds of ammunition for the M60, a spare barrel, or both. During the Vietnam War, the M60 received the nickname “The Pig” due to its bulky size.

Dam Toys No. 78038 History Series 1/6th scale Vietnam War U.S. Marine (Tet Offensive, 1968) 12-inch figure parts list: head sculpt, DAM 3.0 action body (with muscle arms), hands for holding weapons x5, M1 helmet, Mitchell pattern helmet cover, T-shirt, 2nd pattern OG-107 pants, trouser belt, USMC M1955 flak vest, 3rd pattern jungle boots, M1956 belt, USMC jungle first aid kit pouch, M1911A1 pistol, M1911A1 .45 mag x2, M1911A1 .45 mag pouch, M1916 leather pistol holster, USMC Ka-Bar knife, leather sheath, 1-quart canteen x3, M1944 canteen pouch x2, M17A1 gas mask, gas mask bag, ARVN rucksack, nylon poncho, M18 smoke grenade x2, incendiary TH grenade, M26 grenade x3, carabiner, watch, insect repellent, toothbrush, weapons oil, cigarette pack x2, smoking cigarette, M60 machine gun, M60 7.62mm linked ammo (metal), M60 7.62mm linked ammo belt x3, M60 cloth, ammo bandoleer with ammo



Review: DC Comics "Batman v Superman: Dawn of Justice" 1:24 scale Metal Die-Cast Batmobile

“To the Batmobile!”

This was part of my haul from December 2016 posted on my toy blog HERE

The Batmobile is a state-of-the-art, all-terrain, self-powered, armored fighting vehicle used for vehicular hot pursuit, prisoner transportation, anti-tank warfare, riot control, and as a mobile crime lab. Kept in the Batcave, which it accesses through a hidden entrance, the heavily armored, gadget-laden vehicle is used by Batman in his crime-fighting activities.

The Batmobile made its first appearance in Detective Comics #27 (May 1939). Then a red sedan, it was simply referred to as “his car.” Soon it began featuring an increasingly prominent bat motif, typically including distinctive wing-shaped tailfins. Armored in the early stages of Batman’s career, it has been customized over time into a sleek armored supercar hybrid and is the most technologically advanced crime-fighting asset in Batman’s arsenal. Depictions of the vehicle have evolved along with the character, with each incarnation reflecting evolving car technologies. It has appeared in every Batman iteration, from comic books and television to films and video games, and has since become a part of pop culture.

The Batman v Superman: Dawn of Justice Batmobile is said to combine inspiration from both the sleek, streamlined design of classic Batmobiles and the high-suspension, military build of the more recent Tumbler from The Dark Knight Trilogy. Designed by production designer Patrick Tatopoulos and Dennis McCarthy, the Batmobile is about 20 feet long and 12 feet wide. Unlike previous Batmobiles, it has a Gatling gun mounted on the front, and the back tires are shaved-down tractor tires. The Batmobile elevates itself for scenes depicting it going into battle or performing jumps, and lowers to the ground when cruising through the streets.


Jada Toys released this 1:24 die-cast model kit of the Batmobile from the 2016 movie Batman v Superman: Dawn of Justice. Assembly was pretty straight-forward and hassle-free. It isn’t anything like the type of model kits released by Bandai where one really has to put every bit and piece together. Check out my recent toy blog post / review of the Bandai 1/6th scale Star Wars Stormtrooper Model Kit HERE. Now that’s a model kit.

In the upcoming film Justice League, Batman will have a new vehicle called the Knightcrawler, which was designed by his father.


Getting It Going

I had a few discussions this past week about advanced technology in our industry and how it is, or isn’t, being adopted in the architectural market.  This is a massive frustration for me.  I have always been an enthusiastic early adopter of new technology and see the value.  Unfortunately, the people who really control the end results for these new products are the complete opposite of me.  What is the answer here?  How do we get more push?  Interestingly, if you ask people from outside the industry, they’ll blame us- saying we don’t innovate.  But we do.  We have amazing glass products that can hit numbers never seen before and serve as an active part of the structure.  There’s now framing that lets the glass actually perform as expected, rather than dragging down its values because of the frame’s makeup.  And there are plenty of other components that help the assembly as a whole soar.  So the products are there- yet mass adoption continues to be slow.  What are we missing?


–  Saw a tidbit online that made me feel good… 2016 was the best year for residential building starts since 2005-2006.  With the commercial industry running a year behind the residential side, this surely suggests the positivity should continue.  Residential starts have now grown for 7 straight years.

–  One area I failed to mention in depth during last week’s BEC recap was the always extremely helpful presentation by Dr. Tom Culp.  I seriously think his presentation should be streamed to the entire industry (hey, there’s an idea!) because it absolutely affects all of us.  One word that really stuck with me throughout Tom’s presentation was “daylighting”- that surely seems to be an area of serious focus going forward, and obviously our industry has great options for it.  Though you still can’t give in on the energy side, so a happy medium between great daylighting and high performance is a must.

–  The rocky run for the AIA continues.  They are still dealing with the fallout of their post-election press release, and then they ran into another issue when they laid out the keynote speakers for their upcoming show: they did not initially include any women in the program.  After heavy backlash they did add a panel on day 3, but the damage was done.  If you want to get a feel for how some of the membership is feeling, check out the article on the situation and spend some time in the comment section.  Very interesting.
–  For my marketing friends- just a heads up, Twitter is making more changes, including hiding some “low-quality tweets” in conversations.  One thing that is not clear is how Twitter will determine quality, but if we have learned anything from Google and its programs, the rules will be changing constantly.  Never a dull moment when you are trying to be active in the social and online realm.

–  Last this week… now that I am addicted to Netflix (the ability to download shows so I can watch while I fly is awesome), I actually found a work reason to use it.  There’s a new series on there called Abstract: The Art of Design, a documentary series that follows different designers- many of whom are major players in the commercial architectural world.  So in between binges of “House of Cards” I will have some work to watch….


–  I am NOT a believer in Valentine’s Day at all… but this couple did it right for each other!
–  VERY lucky to survive this gator attack on the golf course
–  The pet squirrel is a hero!

I don’t understand the “Challenge” here, but I love Bruno Mars- easily the most talented entertainer out there right now.  So I know there’s no way I could watch his video or listen to his music without singing and dancing….


