Arthur Bernard – Tout est à moi, dit la poussière

An Identity on Borrowed Time

***
Arthur Bernard – Tout est à moi, dit la poussière [Champ Vallon, 2016]

Article written for Le Matricule des anges

What begins as the autofiction of a narrator/author who loses himself in pseudonyms becomes the fiction of another man's real life. The workings of this true-and-false life hardly matter, since from the title onward the dust lets us know that all of it belongs to it. No biblical warning here, but a more prosaic, more poetic evocation of the works of time. The dust weaves its oblivion and lays it down in layers on old yellowed papers. A time that can be rewritten, or indeed simply written, substituting for the elements we lack others that please us. Turning the dust over, dreaming it, contradicting it, rather than returning to it.

It is indeed in old papers, those of a justice system and a France of another era, that Arthur Bernard finds the material for his novel; material born of a desire for homonyms. A name that resembles his own but also evokes Rimbaud and Céline: in short, a name that makes literature. And even mythology, since the ghost of Ulysses and his odyssey haunts the whole book.

A certain Arthur Ferdinand Bernard. A name very quickly shortened to AFB, since here appellations are always malleable. Identity is never a given, but a permanent construction. So too the narrative, made of conjectures, extrapolations, trial and error, parallels. Sentenced to death at eighteen, in 1890, for attempted murder, AFB sees his sentence commuted by presidential pardon to a stay in New Caledonia. A long stay, perhaps a permanent one.

Very quickly, the author exhausts the few official documents at his disposal (court records covered in ornate, flourish-laden handwriting; civil servants' signatures as long as your arm). Only a few years after crossing the oceans to his open-air prison, AFB's trail goes cold. What became of him? One can take the simplest, shortest, poorest route: he fell victim to the short life expectancy of convicts. But imagination can also do its work, the work of what the author calls "l'art-roman," and sidestep the obvious by inventing something other than the evident ("Everything is possible, since I know nothing," the narrator sums up). It can make of his character, a mere found object in obsolete administrative folds, a Rimbaldian negative, an immobile Ulysses, and recount forty years of island life. A miniature myth, without feats of arms ("without a pedigree of heroism"). AFB, just like the poet, is the one who disappears far away, in exotic lands, barely out of adolescence. But unlike Ulysses, no one is waiting for him anymore.

AFB, who is also referred to by his prisoner number, "more eloquent and precise than the three given names of his full name," an eloquence in which identity dissolves for good, lives in these pages a life more interesting than the one he may actually have lived. He serves the families of soldiers and expatriate civil servants and gradually carves out a discreet place for himself. A trade awaits him, that of bookbinder, vaguely studied in his working-class Parisian youth. In a rickety shack battered by a hostile climate (heat, humidity, vermin), he binds Homer and Rimbaud (his doubles) into handsome volumes, protecting them, containing them.

The years pass. A distant war (not the Trojan War, but the Great War of 1914-1918) carries off his only friend, a great admirer of Homer. With age, AFB becomes, in his own way, a figure on the island. So when he develops a passion for kites after reading a manual entrusted to him for binding, there is no shortage of people to help him build a giant one, with which he might well fly off and disappear, rejoining the nothingness of his shifting identity.


Introducing the Open Images Dataset

Posted by Ivan Krasin and Tom Duerig, Software Engineers

In the last few years, advances in machine learning have enabled Computer Vision to progress rapidly, powering everything from systems that automatically caption images to apps that generate natural language replies in response to shared photos. Much of this progress can be attributed to publicly available image datasets, such as ImageNet and COCO for supervised learning, and YFCC100M for unsupervised learning.

Today, we introduce Open Images, a dataset consisting of ~9 million URLs to images that have been annotated with labels spanning over 6000 categories. We tried to make the dataset as practical as possible: the labels cover more real-life entities than the 1000 ImageNet classes, there are enough images to train a deep neural network from scratch and the images are listed as having a Creative Commons Attribution license*.

The image-level annotations have been populated automatically with a vision model similar to Google Cloud Vision API. For the validation set, we had human raters verify these automated labels to find and remove false positives. On average, each image has about 8 labels assigned. Here are some examples:
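To make the "about 8 labels per image" statistic concrete, here is a minimal sketch of computing labels-per-image from annotation data in CSV form. The column names and the confidence field are illustrative assumptions, not the dataset's actual schema:

```python
import csv
import io
from collections import defaultdict

# Tiny stand-in for an Open Images-style annotation file.
# Column names and values here are made up for illustration.
SAMPLE = """image_id,label,confidence
img_001,/m/cat,0.97
img_001,/m/animal,0.95
img_001,/m/pet,0.88
img_002,/m/car,0.99
img_002,/m/vehicle,0.93
"""

def labels_per_image(csv_text, min_confidence=0.5):
    """Group labels by image, keeping only reasonably confident ones."""
    by_image = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        if float(row["confidence"]) >= min_confidence:
            by_image[row["image_id"]].append(row["label"])
    return by_image

grouped = labels_per_image(SAMPLE)
average = sum(len(v) for v in grouped.values()) / len(grouped)
print(average)  # 2.5 for this toy sample
```

The same grouping pass, run over the real annotation files, is how one would reproduce the per-image label average quoted above.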

Annotated images from the Open Images dataset. Left: Ghost Arches by Kevin Krejci. Right: Some Silverware by J B. Both images used under CC BY 2.0 license

We have trained an Inception v3 model based on Open Images annotations alone, and the model is good enough to be used for fine-tuning applications as well as for other things, like DeepDream or artistic style transfer, which require a well-developed hierarchy of filters. We hope to improve the quality of the annotations in Open Images over the coming months, and thereby the quality of models that can be trained on them.

The dataset is a product of a collaboration between Google, CMU and Cornell universities, and there are a number of research papers built on top of the Open Images dataset in the works. It is our hope that datasets like Open Images and the recently released YouTube-8M will be useful tools for the machine learning community.


* While we tried to identify images that are licensed under a Creative Commons Attribution license, we make no representations or warranties regarding the license status of each image and you should verify the license for each image yourself.


Hot Toys Star Wars: The Force Awakens 1/6th scale Luke Skywalker 12-inch Collectible Figure

“Luke Skywalker? I thought he was a myth.” – Rey

In the aftermath of the fall of the Empire, Luke Skywalker, the last surviving Jedi, has put himself in exile after his attempt to train a new generation of Jedi went horribly awry. As a new threat to the galaxy known as the First Order emerges, Leia, Han, Chewie, and a group of Resistance heroes risk their lives trying to locate Luke’s whereabouts – with the hope of bringing him back into the fold in their desperate struggle to restore peace and justice to the galaxy.

Today Hot Toys is very excited to officially present the much anticipated 1/6th scale Luke Skywalker Collectible Figure from Star Wars: The Force Awakens! The movie-accurate collectible figure is specially crafted based on the image of Mark Hamill as Luke Skywalker in the film featuring a newly developed head sculpt, specially tailored costume, a mechanical right hand, and a Star Wars-themed figure stand.

When you pre-order now, you can also receive a specially designed diorama figure base as a pre-order bonus accessory!

Scroll down to see all the pictures.
Click on them for bigger and better view.

Hot Toys MMS390 Star Wars: The Force Awakens 1/6th scale Luke Skywalker 12-inch Collectible Figure’s special features: Authentic and detailed likeness of Mark Hamill as Luke Skywalker in Star Wars: The Force Awakens with movie-accurate facial expression and detailed skin texture | Approximately 28 cm tall | Body with over 30 points of articulation | Three (3) interchangeable mechanical right hands including: One (1) right fist, One (1) relaxed right hand, One (1) gesturing right hand | Three (3) interchangeable left hands including: One (1) left fist, One (1) relaxed left hand, One (1) gesturing left hand

Costume: One (1) beige-colored Jedi robe, One (1) beige and white over-tunic, One (1) white under-tunic, One (1) leather-like belt with pouch, One (1) pair of pants, One (1) pair of light gray-colored boots

Accessory: Specially designed figure stand with the Resistance’s insignia, Luke Skywalker nameplate and movie logo. Pre-Order Bonus Accessory: Diorama figure base

Related posts:
Review of Hot Toys MMS297 Star Wars Episode IV: A New Hope 1/6th scale Luke Skywalker collectible figure posted on my toy blog HERE, HERE and HERE
Sideshow Collectibles “Star Wars” 1:6 scale Luke Skywalker Red Five X-Wing Pilot figure reviewed HERE and HERE
Comparison between Sideshow 1:6 scale Luke Skywalker and Hot Toys 12-inch Luke Skywalker (pics HERE)


Hot Toys MMS389 Rogue One: A Star Wars Story 1/6th scale Shoretrooper Collectible Figure

continued from previous toy blog post

The Star Wars galaxy expands in size and scope as the story advances to new planets and locales never before seen on film. This December, Rogue One: A Star Wars Story will introduce us to the secluded and heavily guarded tropical planet Scarif, where Imperial military installations are established and new specialist Stormtroopers known as the Shoretroopers are stationed to patrol the beaches and bunkers of the planetary facility.

Today Hot Toys is excited to announce the 1/6th scale collectible figure of the new Shoretrooper. The movie-accurate collectible figure is specially crafted based on the image of the Shoretrooper in Rogue One: A Star Wars Story, featuring a highly poseable new body with a distinctive helmet, all-new armor with weathering effects, a new blaster rifle, detailed utility belt with pouch and other accessories, plus a specially designed figure stand and backdrop.

Hot Toys MMS389 Rogue One: A Star Wars Story 1/6th scale Shoretrooper Collectible Figure’s special features: Authentic and detailed likeness of the Shoretrooper in Rogue One: A Star Wars Story | Approximately 30 cm tall | Body with over 30 points of articulation | Six (6) interchangeable gloved hands including: One (1) pair of fists, One (1) pair of relaxed hands, One (1) pair of hands for holding weapons

Scroll down to see the rest of the pictures.
Click on them for bigger and better view.

Costume: One (1) newly designed and finely crafted Shoretrooper armor with blue breastplate, One (1) pair of brown colored pants, One (1) utility belt with pouch, One (1) pair of armored boots

Weapon: One (1) blaster rifle

Accessory: Specially designed character theme figure stand and backdrop

When you pre-order now, you can also receive a specially designed Shoretrooper theme backdrop! Prepare to Go Rogue as the Shoretrooper will be a unique and indispensable addition to your Imperial army!

Release date: Q4, 2016 – Q1, 2017

Related posts:
Hot Toys Toys ”R” Us Exclusive Rogue One 1:6 scale Jedha Patrol Stormtrooper 12-inch figure (pics HERE)
Hot Toys Star Wars Rogue One 1/6th scale Death Trooper (Specialist) Collectible Figure previewed HERE
Bandai Tamashii SH Figuarts Star Wars Rogue One action figures PREVIEW and more… (pics HERE)
Check out “Rogue One: A Star Wars Story” costumes display at Star Wars Celebration 2016! posted on my toy blog HERE


Hot Toys Rogue One: A Star Wars Story 1/6th scale Darth Vader 35cm Tall Collectible Figure

Darth Vader returns in force this year with Rogue One: A Star Wars Story! The epic space saga film takes place just before the events of Star Wars: A New Hope, a time when the Galactic Empire is at the peak of its power, whereas the fledgling Rebel Alliance plots to thwart the Empire’s plan to build a moon-sized battle station. As the Emperor’s top enforcer and the symbol of the Empire’s reign of terror, Lord Vader will crush the Rebels by any means necessary.

While fans worldwide are holding their breath to experience the new film, Hot Toys is here to unleash the power of the Dark Side with the new 1/6th scale Darth Vader collectible figure!

The movie-accurate collectible figure is specially crafted and features a newly developed body and armor, a highly polished new helmet sculpt, a sophisticatedly tailored and delicately textured costume, a utility belt with LED lights, and a lightsaber hilt.

When you pre-order now, you can also receive a specially designed Death Star theme double-sided backdrop! Search your feelings… you know you want this for your Hot Toys Star Wars Collection! It is time to Go Rogue!

Scroll down to see all the pictures.
Click on them for bigger and better view.

Hot Toys MMS388 Rogue One: A Star Wars Story 1/6th scale Darth Vader Collectible Figure’s special features: Authentic and detailed likeness of Darth Vader in Rogue One: A Star Wars Story | Approximately 35 cm tall | New body with over 30 points of articulation | Nine (9) interchangeable gloved hands including: One (1) pair of fists, One (1) pair of open palms, One (1) pair of resting hands, One (1) pair of hands for holding lightsaber, One (1) gesturing right hand

Costume: One (1) newly designed and finely tailored Darth Vader armor and suit, One (1) black cloak, One (1) system function belt with LED light-up function (white light, battery operated), One (1) pair of armored black boots

Weapon: One (1) lightsaber hilt

Accessory: Specially designed character theme figure stand and double-sided Death Star theme backdrop

Release date: Q4, 2016 – Q1, 2017

Related posts:
Hot Toys Star Wars: Episode IV A New Hope: 1/6th scale Darth Vader 35cm (14-inch) Collectible Figure previewed HERE
Review of Sideshow Collectibles “Star Wars Episode VI: Return of the Jedi” Darth Vader Deluxe 1/6th scale figure posted on my toy blog HERE and HERE
Comparing Sideshow Collectibles 1/6th scale Star Wars ANH and ROTJ Darth Vader action figures (pics HERE)


Image Compression with Neural Networks

Posted by Nick Johnston and David Minnen, Software Engineers

Data compression is used nearly everywhere on the internet – the videos you watch online, the images you share, the music you listen to, even the blog you’re reading right now. Compression techniques make sharing the content you want quick and efficient. Without data compression, the time and bandwidth costs for getting the information you need, when you need it, would be exorbitant!

In “Full Resolution Image Compression with Recurrent Neural Networks”, we expand on our previous research on data compression using neural networks, exploring whether machine learning can provide better results for image compression like it has for image recognition and text summarization. Furthermore, we are releasing our compression model via TensorFlow so you can experiment with compressing your own images with our network.

We introduce an architecture that uses a new variant of the Gated Recurrent Unit (a type of RNN that allows units to save activations and process sequences) called Residual Gated Recurrent Unit (Residual GRU). Our Residual GRU combines existing GRUs with the residual connections introduced in “Deep Residual Learning for Image Recognition” to achieve significant image quality gains for a given compression rate. Instead of using a DCT to generate a new bit representation like many compression schemes in use today, we train two sets of neural networks – one to create the codes from the image (encoder) and another to create the image from the codes (decoder).
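To make the Residual GRU idea concrete, here is a minimal NumPy sketch of a single GRU cell whose output adds a residual (skip) connection from its input. The weight shapes and the exact placement of the skip connection are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ResidualGRUCell:
    """A GRU cell with a residual connection added to its output, in the
    spirit of combining GRUs with residual learning. Details are a sketch."""
    def __init__(self, dim):
        scale = 1.0 / np.sqrt(dim)
        # Stacked weights for update gate z, reset gate r, and candidate state.
        self.W = rng.uniform(-scale, scale, (3, dim, dim))  # input weights
        self.U = rng.uniform(-scale, scale, (3, dim, dim))  # recurrent weights

    def step(self, x, h):
        z = sigmoid(x @ self.W[0] + h @ self.U[0])        # update gate
        r = sigmoid(x @ self.W[1] + h @ self.U[1])        # reset gate
        h_tilde = np.tanh(x @ self.W[2] + (r * h) @ self.U[2])
        h_new = (1.0 - z) * h + z * h_tilde               # standard GRU update
        return h_new, x + h_new                           # residual output

cell = ResidualGRUCell(dim=8)
h = np.zeros(8)
x = rng.standard_normal(8)
h, out = cell.step(x, h)
print(out.shape)  # (8,)
```

The residual path lets gradients and information flow around the gated update, which is the property the architecture exploits to improve image quality at a given compression rate.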

Our system works by iteratively refining a reconstruction of the original image, with both the encoder and decoder using Residual GRU layers so that additional information can pass from one iteration to the next. Each iteration adds more bits to the encoding, which allows for a higher quality reconstruction. Conceptually, the network operates as follows:

  1. The initial residual, R[0], corresponds to the original image I: R[0] = I.
  2. Set i=1 for the first iteration.
  3. Iteration[i] takes R[i-1] as input and runs the encoder and binarizer to compress the image into B[i].
  4. Iteration[i] runs the decoder on B[i] to generate a reconstructed image P[i].
  5. The residual for Iteration[i] is calculated: R[i] = I – P[i].
  6. Set i=i+1 and go to Step 3 (up to the desired number of iterations).

The residual image represents how different the current version of the compressed image is from the original. This image is then given as input to the network with the goal of removing the compression errors from the next version of the compressed image. The compressed image is now represented by the concatenation of B[1] through B[N]. For larger values of N, the decoder gets more information on how to reduce the errors and generate a higher quality reconstruction of the original image.
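The loop above can be sketched with a deliberately crude stand-in codec, using progressively finer uniform quantization in place of the learned encoder/decoder pair, just to show how each iteration's residual shrinks as more "bits" accumulate. Everything below is a toy illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(42)

def encode(residual, step):
    """Toy stand-in for the learned encoder + binarizer: quantize to integer codes."""
    return np.round(residual / step).astype(int)

image = rng.uniform(-1, 1, size=(4, 4))   # stand-in for the original image I
reconstruction = np.zeros_like(image)     # running reconstruction P[i]
residual = image - reconstruction         # R[0] = I
step = 0.5                                # coarse first pass
errors = []

for i in range(4):                        # Iteration[1..4]
    codes = encode(residual, step)        # B[i]
    reconstruction += codes * step        # toy "decoder" refines P[i]
    residual = image - reconstruction     # R[i] = I - P[i]
    errors.append(float(np.abs(residual).max()))
    step /= 2                             # later iterations spend bits on finer detail

print(errors)  # the maximum error shrinks as iterations add information
```

Just as in the real system, each pass encodes only what the previous passes got wrong, so the concatenated codes form a progressively refinable representation.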

To understand how this works, consider the following example of the first two iterations of the image compression network, shown in the figures below. We start with an image of a lighthouse. On the first pass through the network, the original image is given as input (R[0] = I). P[1] is the reconstructed image. The difference between the original image and the reconstructed image is the residual, R[1], which represents the error in the compression.

Left: Original image, I = R[0]. Center: Reconstructed image, P[1]. Right: the residual, R[1], which represents the error introduced by compression.

On the second pass through the network, R[1] is given as the network’s input (see figure below). A higher quality image P[2] is then created. So how does the system recreate such a good image (P[2], center panel below) from the residual R[1]? Because the model uses recurrent nodes with memory, the network saves information from each iteration that it can use in the next one. It learned something about the original image in Iteration[1] that is used along with R[1] to generate a better P[2] from B[2]. Lastly, a new residual, R[2] (right), is generated by subtracting P[2] from the original image. This time the residual is smaller, since there are fewer differences between the reconstructed image and what we started with.

The second pass through the network. Left: R[1] is given as input. Center: A higher quality reconstruction, P[2]. Right: A smaller residual R[2] is generated by subtracting P[2] from the original image.

At each further iteration, the network gains more information about the errors introduced by compression (which is captured by the residual image). If it can use that information to predict the residuals even a little bit, the result is a better reconstruction. Our models are able to make use of the extra bits up to a point. We see diminishing returns, and at some point the representational power of the network is exhausted.

To demonstrate file size and quality differences, we can take a photo of Vash, a Japanese Chin, and generate two compressed images, one JPEG and one Residual GRU. Both images target a perceptual similarity of 0.9 MS-SSIM, a perceptual quality metric that reaches 1.0 for identical images. The image generated by our learned model results in a file 25% smaller than the JPEG.

Left: Original image (1419 KB PNG) at ~1.0 MS-SSIM. Center: JPEG (33 KB) at ~0.9 MS-SSIM. Right: Residual GRU (24 KB) at ~0.9 MS-SSIM. This is 25% smaller for a comparable image quality.

Taking a look around his nose and mouth, we see that our method doesn’t have the magenta blocks and noise in the middle of the image as seen in JPEG. This is due to the blocking artifacts produced by JPEG, whereas our compression network works on the entire image at once. However, there’s a tradeoff — in our model the details of the whiskers and texture are lost, but the system shows great promise in reducing artifacts.

Left: Original. Center: JPEG. Right: Residual GRU.

While today’s commonly used codecs perform well, our work shows that using neural networks to compress images results in a compression scheme with higher quality and smaller file sizes. To learn more about the details of our research and a comparison of other recurrent architectures, check out our paper. Our future work will focus on even better compression quality and faster models, so stay tuned!


Bandai Tamashii Nations “Manga Realization” Steel Samurai Iron Man 7-inch action figure

Bandai has been quite successful with their Tamashii Nations Star Wars Movie Realization line of Samurai-inspired Star Wars action figures, mostly of the armor-clad troopers of the Galactic Empire.

Now they have gone into Manga Realization of Marvel characters reimagined as Samurai warriors. Their first offering was the Manga Realization Samurai Spider-Man action figure which was previewed earlier on my toy blog HERE.

Now Bandai has given us preview pictures of their upcoming Tamashii Nations “Manga Realization” Steel Samurai Iron Man action figure, the next Marvel addition to the line-up, following the recently announced Stormtrooper Samurai Archer action figure from their Star Wars series – check out the pics HERE.

According to Tamashii Nations, this “Manga Realization” Steel Samurai Iron Man action figure will be about 18 cm (or 7.1 inches) tall and will feature a “Range of motion that allows for dynamic poses.” It will be made of high-quality PVC and ABS plastics, and the figure will come with three sets of interchangeable hands (a pair of fists, a pair of open-palm repulsor blast style hands, and a pair of sword-holding hands). In addition, the figure will be armed with a sword and will have a jetpack on its back.

Scroll down to see the rest of the pictures.
Click on them for bigger and better view.

Shipping in February 2017


Announcing YouTube-8M: A Large and Diverse Labeled Video Dataset for Video Understanding Research

Posted by Sudheendra Vijayanarasimhan and Paul Natsev, Software Engineers

Many recent breakthroughs in machine learning and machine perception have come from the availability of large labeled datasets, such as ImageNet, which has millions of images labeled with thousands of classes. Their availability has significantly accelerated research in image understanding, for example on detecting and classifying objects in static images.

Video analysis provides even more information for detecting and recognizing objects, and understanding human actions and interactions with the world. Improving video understanding can lead to better video search and discovery, similarly to how image understanding helped re-imagine the photos experience. However, one of the key bottlenecks for further advancements in this area has been the lack of real-world video datasets with the same scale and diversity as image datasets.

Today, we are excited to announce the release of YouTube-8M, a dataset of 8 million YouTube video URLs (representing over 500,000 hours of video), along with video-level labels from a diverse set of 4800 Knowledge Graph entities. This represents a significant increase in scale and diversity compared to existing video datasets. For example, Sports-1M, the largest existing labeled video dataset we are aware of, has around 1 million YouTube videos and 500 sports-specific classes; YouTube-8M represents nearly an order of magnitude increase in both the number of videos and the number of classes.

In order to construct a labeled video dataset of this scale, we needed to address two key challenges: (1) video is much more time-consuming to annotate manually than images, and (2) video is very computationally expensive to process and store. To overcome (1), we turned to YouTube and its video annotation system, which identifies relevant Knowledge Graph topics for all public YouTube videos. While these annotations are machine-generated, they incorporate powerful user engagement signals from millions of users as well as video metadata and content analysis. As a result, the quality of these annotations is sufficiently high to be useful for video understanding research and benchmarking purposes.

To ensure the stability and quality of the labeled video dataset, we used only public videos with more than 1000 views, and we constructed a diverse vocabulary of entities, which are visually observable and sufficiently frequent. The vocabulary construction was a combination of frequency analysis, automated filtering, verification by human raters that the entities are visually observable, and grouping into 24 top-level verticals (more details in our technical report). The figures below depict the dataset browser and the distribution of videos along the top-level verticals, and illustrate the dataset’s scale and diversity.
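A toy version of that vocabulary-filtering step might look like the sketch below. The entity records, counts, and thresholds are invented for illustration; only the >1000-views-style frequency cutoff and the visual-observability check mirror the pipeline described above:

```python
# Toy entity records: (entity name, labeled-video count, visually observable?)
# All numbers and flags below are made up for illustration.
CANDIDATES = [
    ("Guitar", 120_000, True),
    ("Cooking", 90_000, True),
    ("Philosophy", 40_000, False),  # frequent, but not visually observable
    ("Theremin", 800, True),        # observable, but too rare
]

MIN_VIDEOS = 1_000

def build_vocabulary(candidates, min_videos=MIN_VIDEOS):
    """Keep entities that are both frequent enough and visually observable,
    mirroring the frequency-analysis + human-verification filtering."""
    return sorted(
        name
        for name, count, observable in candidates
        if count >= min_videos and observable
    )

vocab = build_vocabulary(CANDIDATES)
print(vocab)  # ['Cooking', 'Guitar']
```

In the real pipeline the observability flag comes from human raters and the surviving entities are further grouped into 24 top-level verticals.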

A dataset explorer allows browsing and searching the full vocabulary of Knowledge Graph entities, grouped in 24 top-level verticals, along with corresponding videos. This screenshot depicts a subset of dataset videos annotated with the entity “Guitar”.
The distribution of videos in the top-level verticals illustrates the scope and diversity of the dataset and reflects the natural distribution of popular YouTube videos.

To address (2), we had to overcome the storage and computational resource bottlenecks that researchers face when working with videos. Pursuing video understanding at YouTube-8M’s scale would normally require a petabyte of video storage and dozens of CPU-years worth of processing. To make the dataset useful to researchers and students with limited computational resources, we pre-processed the videos and extracted frame-level features using a state-of-the-art deep learning model: the publicly available Inception-V3 image annotation model trained on ImageNet. These features are extracted at 1 frame-per-second temporal resolution, from 1.9 billion video frames, and are further compressed to fit on a single commodity hard disk (less than 1.5 TB). This makes it possible to download this dataset and train a baseline TensorFlow model at full scale on a single GPU in less than a day!
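The storage figures in that paragraph are easy to sanity-check with back-of-envelope arithmetic, treating the post's round numbers as exact:

```python
# Sanity-check the dataset's storage figures; all inputs below are the
# round numbers quoted in the post itself.
hours = 500_000                  # total video duration in the dataset
fps = 1                          # feature extraction rate: 1 frame per second
implied_frames = hours * 3600 * fps
print(implied_frames)            # 1.8 billion, consistent with the ~1.9 billion quoted

frames = 1_900_000_000           # frame count quoted in the post
total_bytes = 1.5e12             # "less than 1.5 TB" (decimal terabytes)
bytes_per_frame = total_bytes / frames
print(round(bytes_per_frame))    # roughly 789 bytes of compressed features per frame
```

A few hundred bytes per frame is what makes the dataset fit on a single commodity hard disk rather than requiring petabyte-scale video storage.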

We believe this dataset can significantly accelerate research on video understanding, as it enables researchers and students without access to big data or big machines to do their research at previously unprecedented scale. We hope this dataset will spur exciting new research on video modeling architectures and representation learning, especially approaches that deal effectively with noisy or incomplete labels, transfer learning, and domain adaptation. In fact, we show that pre-training models on this dataset and then fine-tuning them on other external datasets leads to state-of-the-art performance on them (e.g. ActivityNet, Sports-1M). You can read all about our experiments using this dataset, along with more details on how we constructed it, in our technical report.
