Categories
Ξ TREND

The network showed the first images of the canceled MMO game based on Marvel comics


Swedish company Enad Global 7 recently announced the cancellation of an untitled MMO game featuring characters from Marvel Comics, a project first announced last fall. Now, interface designer Ramiro Galan has shared official concept art from the canceled game.

The images show the character customization menu. At the start of the game, players would choose their hero’s gender, name, face, body type, costume and superpowers, and join one of four factions: the X-Men, the Fantastic Four, S.H.I.E.L.D. or the Avengers.

Judging by the art, the developers planned cartoon-like graphics with rich, bright colors. As Galan himself notes, the game’s style was similar to the animated film Spider-Man: Into the Spider-Verse.

It was previously reported that Microsoft, rather than Sony, could have released a Spider-Man game, but immediately rejected the offer. Marvel also showed how it improved the quality of the CGI in the She-Hulk series. And one enthusiast demonstrated how mods are made by turning God of War into a Simpsons game.

  • A fan showed Tom Holland’s Spider-Man as a mutant with six arms, and what the character looked like in the 90s cartoon
  • A God of War artist revealed his vision of one of Marvel’s most powerful supervillains and the X-Men’s worst enemy
  • New photos from the set of Guardians of the Galaxy 3 prove that the film deserves a place in the Guinness Book of Records



Meta launches an artificial intelligence that can create images using “common sense”


Meta introduced an AI that it says can fill in images more accurately than other tools on the market.

Meta presented the AI on Tuesday, saying it can complete images using “common sense”. The company explained that its model doesn’t compare pixels like other available tools do, but instead understands abstract representations drawn from “prior knowledge about the world”. In this way, the company said, it can fill in raw images more accurately than other tools on the market.

Meta has named it the Image Joint Embedding Predictive Architecture (I-JEPA). The model is based on the vision of Meta’s chief AI scientist, Yann LeCun, whose idea “is to create machines that can learn internal models of how the world works”, the company wrote on its blog.

Systems like ChatGPT are trained with what is known as “supervised” learning, that is, from a large set of labeled data. Instead of labeled data, I-JEPA directly analyzes images or sounds, explained Meta, the parent company of Facebook, Instagram and WhatsApp. This other method is called “self-supervised” learning.

If we show some pictures of cows to young children, they will eventually be able to recognize any cow they see. In the same way, I-JEPA can identify representations through comparisons.
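The self-supervised idea can be sketched in a few lines of NumPy. This is a toy, hypothetical illustration (not Meta’s I-JEPA code): a linear predictor learns to map the representation of a visible “context” part of each input to the representation of a masked “target” part, so the supervision signal comes from the data itself rather than from human labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": each row is a vector whose left half is the visible
# context and whose right half is the masked target. The target is
# made to depend on the context so there is structure to learn.
X = rng.normal(size=(200, 8))
context = X[:, :4]
target = 0.5 * (context @ rng.normal(size=(4, 4))) + 0.1 * X[:, 4:]

# Fixed random "encoders" standing in for learned networks that map
# raw inputs to abstract representations.
enc_c = rng.normal(size=(4, 3))
enc_t = rng.normal(size=(4, 3))
z_c = context @ enc_c
z_t = target @ enc_t

# Self-supervised objective: predict the target's *representation*
# from the context's representation. No labels are involved; the
# "label" is simply another part of the same input.
W = np.zeros((3, 3))
initial_loss = np.mean((z_c @ W - z_t) ** 2)
for _ in range(500):
    grad = 2 * z_c.T @ (z_c @ W - z_t) / len(X)
    W -= 0.01 * grad
final_loss = np.mean((z_c @ W - z_t) ** 2)

print(f"loss before: {initial_loss:.3f}  after: {final_loss:.3f}")
```

The loss falls without any labeled example ever being shown, which is the essence of the “self-supervised” method the article describes.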

Meta posted an example of how its AI was able to fill in images of various animals and a landscape. The model was able to “semantically” recognize which parts were missing thanks to the context: the head of the dog or the leg of the bird, for example.

“Human and non-human animals appear capable of learning enormous amounts of prior knowledge about how the world works through observation and through an incomprehensibly small number of interactions in an unsupervised and task-independent manner,” LeCun explained in a paper published in February 2022. It is worth hypothesizing, he said then, that this accumulated knowledge can “form the basis of what is usually called common sense”.

This “common sense” is what would guide AI models to know what is probable, what is possible and what is impossible. For this reason, Meta says, I-JEPA would not make errors that are common in images generated by other AIs, such as hands with more than five fingers.

AI systems based on labeled datasets (such as ChatGPT) are often very good at the specific tasks they were trained to do. “But it is impossible to label everything in the world”, Meta explained in another report on its 2021 research.

There are also some tasks for which there simply isn’t enough labeled data. If AI systems can gain a deeper understanding of reality beyond their training, “they will be more useful and ultimately bring AI closer to human-level intelligence”.

Achieving “common sense” would be like reaching the dark matter of AI, Meta explained in 2021. The company believes that this type of AI can learn much faster, plan how to perform complex tasks, and easily adapt to unfamiliar situations.

After long betting on the metaverse, Meta has now begun drawing more attention to its AI developments. In May it launched AI Sandbox, a “testing ground” for early versions of AI-powered advertising tools. For now, the tests focus on text writing, background generation and image overlay.

Those from Menlo Park have also presented LLaMa, their large language model, and SAM, an AI capable of recognizing elements and meanings within an image. In addition, Mark Zuckerberg, the company’s CEO, said they plan to develop a virtual assistant focused on improving its users’ social lives.

I-JEPA, like the other developments announced by Meta, is currently designed to be tested by the scientific community and not by the general public. This has been the great hallmark of Meta compared to the competition.



Meta presents an AI tool capable of generating images or texts imitating human reasoning


Meta has presented a new tool that allows the generation of images and texts with artificial intelligence (AI) using the prediction of certain parts of the content and imitating human reasoning.

The company has explained that this solution is the result of an idea by Meta’s chief AI scientist, Yann LeCun, who proposed “a new architecture aimed at overcoming the main limitations of the most advanced AI systems”, as stated in a press release.

The result of their work is the Image Joint Embedding Predictive Architecture (I-JEPA), a tool that collects data from the outside world, creates an internal model of it, and compares abstract representations of images, instead of comparing the pixels themselves.

The company has recalled that humans “learn an enormous amount of prior knowledge about the world by passively observing it”, an aspect that it considers “key to enabling intelligent behavior”.

For this reason, the objective of this model is to predict the representation of a part of a content, such as an image or a text, based on the context offered by other parts of the composition.

Once I-JEPA collects all this information, it is in charge of predicting the missing pixels of an image or the words that do not appear in a certain text, to give it a natural and realistic meaning.

Meta has also commented that, unlike other generative AIs, theirs uses “abstract prediction targets” from which unnecessary pixel-level detail has been removed, allowing the model to learn additional semantic features.
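Why do abstract targets discard pixel-level detail? A tiny, hypothetical stand-in (not Meta’s model, which learns its representations) makes the point: average-pooling an image into a coarse representation shrinks the effect of pixel noise, so two images with the same content but different noise look nearly identical in representation space.

```python
import numpy as np

rng = np.random.default_rng(1)

def abstract_repr(img, block=4):
    # Toy "abstract" representation: average-pool each block x block
    # region, discarding pixel-level detail. A real model would learn
    # this mapping instead of hard-coding pooling.
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

base = rng.random((8, 8))                               # the "content"
noisy = base + rng.normal(scale=0.05, size=base.shape)  # same content, pixel noise

pixel_diff = np.mean((base - noisy) ** 2)
repr_diff = np.mean((abstract_repr(base) - abstract_repr(noisy)) ** 2)

print(f"pixel-space difference: {pixel_diff:.5f}")
print(f"representation difference: {repr_diff:.5f}")
```

Averaging 16 independent noise values per block cuts the noise variance by roughly a factor of 16, so a model that predicts in this coarser space can ignore noise and focus on semantic structure.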

The company finally indicated that it continues working to broaden the tool’s scope so that it learns “more general” models from richer modalities. For example, it could make spatial and temporal predictions about future events in a video from only a brief context.


Instax Square SQ40: New Fuji instant camera with larger square images


A new instant camera from Fujifilm fuels the analog trend. The latest model shoots square again.

Instax celebrates 25th anniversary

Fujifilm expands its product range with the new Instax Square SQ40. The analog instant camera, in a classic design with a high-quality black leather look, focuses on ease of use and square instant photos. Incidentally, with the SQ10 the manufacturer presented its first digital instant camera, including a microSD card slot, back in 2017; the SQ40 is not to be understood as a further development of it.

Introduced in Japan in 1998, the Instax series, now celebrating its 25th anniversary, has established itself in over 100 countries worldwide. The square 62 x 62 millimeter format is meant to produce better images by capturing the subject along with more of its surroundings.

Automatic exposure function

The SQ40 has an automatic exposure function that is activated simply by rotating the lens, promising optimal exposure settings for every scene. Rotating the lens again activates selfie mode, which is also optimized for close-ups (a closest focusing distance between 30 and 50 centimeters). A small mirror next to the lens should ensure that you don’t completely miss the mark with selfies.

New film format “Sunset”

In addition to the SQ40, Fujifilm has launched a matching square film, “Sunset”, which is said to differ from the others in its grain and the gradations around the edge of the picture. A camera case specially designed for the SQ40, in a matching color and texture, will also be available.

Fujifilm has announced that the SQ40 will not be the last Instax camera – but in my opinion they will soon have to come up with something genuinely new for a purchase to be worthwhile for instant-print fans at all.

Can be pre-ordered for €160

You can pre-order the Fujifilm Instax SQ40 for around 160 euros, on Amazon for example, with deliveries scheduled to begin on June 29th. Matching square films from Fuji start at 10 euros for 10 exposures, and two CR2 batteries should be good for about 30 photos. More information is available on the official website.


The Samsung Galaxy Z Fold 6 reveals its design in the first leaked images


After a fifth generation of the foldable Galaxy that turns into a tablet, the Galaxy Z Fold5, which we rated “the best foldable” thanks to a couple of changes over its predecessor, Samsung continues working on the next model.

The round of leaks and rumors has already begun, revealing that there will be a cheaper version of the future Galaxy Z Fold6. Now a new leak brings the first images that clearly show its design and reveal some improvements. This is what the Korean manufacturer’s next foldable looks like.

A thinner Galaxy Z Fold6: just what you need to avoid comparisons with the competition

On this occasion, the information comes from the Pigtou portal, where we can see the next folding phone from the best-known brand on the Android scene. The Galaxy Z Fold6 will take advantage of a patent registered by Samsung itself in the United States.

Thus, this patent envisions a thinner design that is also wider than its predecessor. It is therefore expected that both the external and internal panels will have a larger diagonal.

In addition, industry sources affirm that the Galaxy Z Fold6 will not only be thinner: its screens will have a different aspect ratio from what has been usual in recent generations.

Indeed, the leaks also point to a larger external screen, one of the main criticisms the device has received compared to rivals such as the Oppo Find N2 Flip.

Considering that Samsung’s foldables are still among the thickest on the market, this leak fits squarely with the strategy the manufacturer appears to have taken. Furthermore, it has been speculated that the Korean firm has been studying competing devices.

Now, what is special about the design found in the registered patent? It seeks to address one of the challenges foldables face today: finding the balance between design and functionality. According to the patent, a new hinge module will be introduced that would guarantee a less pronounced thickness without compromising durability.

For now this is all we know, and none of this information is official. Only time will tell whether Samsung’s next foldable ‘Fold’ is thinner and whether the diagonal of its exterior panel increases, something users expect. Until a ‘Galaxy Unpacked’ announcement, which should arrive next summer, we will remain attentive to the movements of the Korean giant.


Facebook is filling up with stolen images created with AI. Their users believe they are real

Facebook is still an immense social network with many active users, although it is not what it once was. Instagram has cannibalized its territory, and those who still log into Facebook with the same frequency are fewer and fewer and belong to quite specific demographics.

They are witnesses to, and participants in, what is happening on this social network: images created by AI, often directly stolen, are being passed off as authentic by the classic accounts that spread nothing but viral content to boost their reach.

Viral recreations

If you are one of those who still log into Facebook, even occasionally, you may have seen a photo like the following: a man posing in a sawmill next to a wooden sculpture of a German shepherd. Although sometimes it is a bulldog, or the person posing is a woman. Sometimes the dog has a hyper-realistic style, and other times it is more polygonal.

In reality, they are variations on the original content of Michael Jones, a British sculptor who often shares his wood carving work. Over a few months, he published a series of photographs and videos about the process of carving the figure of a German shepherd, as well as dogs of other breeds.

From there, pages of this type, designed to make inspiring or tear-jerking content go viral, have generated dozens, perhaps hundreds, of AI variations. Place the carving next to a person who has nothing to do with Jones, posing with it, and the likes and praise roll in. Sometimes the size of the carving is modified too.

“Your work is incredible”, “beautiful”, “Very well done!”, “Formidable work” and similar phrases are the most common comments on this type of image. One image has accumulated more than a million likes. It was uploaded by a page called ‘Dogs 4 Life’, and its level of cunning is such that it has filled the comments box with its own comments linking to websites where it has commercial interests, both to drive visits and to bury the real comments.

In some cases it is more obvious than in others that an image was created with AI rather than being a real photograph. As in this one, both because of the man’s face and because of how extremely detailed the supposedly wood-carved dog is. Not even Corradini could achieve those textures.

The comments usually look like this, completely positive and laudatory:

Michael Jones, interviewed in this regard, is naturally displeased with this phenomenon: he believes the pages are depriving him of legitimate credit and exposure for his work, and that they set unrealistic expectations for this type of art, since the more people appear able to do it, the less it is valued.

The image with Jones’s original carving.

There are more examples, such as an image on a page called “Happy Day” that claims to have created a carved wooden sculpture “with its own hands”. The longer one looks at the base of the sculpture, the more the seams begin to show.

And once you start looking, you discover it is a plague. Apparently half the planet is carving wood by hand at a legendary level. The counterfeiters have gone far beyond the German shepherd.

Another obsession is children or adolescents who paint their own self-portraits. There are several notable models, but none as popular as a blonde teenager holding her own painting, with grass and trees in the background.

Apparently this is the original image:

Another example goes beyond the model of the young blonde girl but is still nothing more than a crude recreation passing itself off as authentic, and successfully so, judging by its comments.

A New Zealand woman named Catherine Hall has taken detecting these types of images to the next level: she tracks them and records them in spreadsheets. She has several, they are public, and across dozens of rows she notes the details of each image to keep track, from the portraits of a teenager reused ad nauseam to more or less subtle modifications. These are often not text-to-image generations but rather modifications made from an existing image.

One of Hall’s spreadsheets.

Another of Hall’s spreadsheets.

And another one.

Other collected examples show how AI image creation and modification, which achieves increasingly photorealistic results, is being used more and more by Facebook pages that seek interactions they can later monetize.

Using inspirational or beautiful images to harvest interactions and then exploit them commercially is a relatively harmless use, beyond the annoyance it may cause the actual artists who created what appears in the original image.

The problem may grow if false, photorealistic images begin to become popular on Facebook, or on another platform, to discredit public figures, delegitimize politicians, or use celebrities as a hook to sell fraudulent services.

The latter is already happening, including in Spain. If simple viral images are achieving so much success, why wouldn’t a far more malicious and harmful use of this type of image succeed as well?


The new neural network generates images 8 times faster than its counterpart from OpenAI

South Korean researchers have reported the development of a new artificial intelligence tool capable of generating an image in 1.5-2 seconds from a user’s text description. The tool does not require any specialized or expensive equipment to operate.

To create the tool, the developers used a special technique, knowledge distillation, to compress the open-source image generation model Stable Diffusion XL. That model has about 2.5 billion parameters, the variables a neural network adjusts during training.
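The distillation objective can be sketched in a few lines. This is a toy, hypothetical example (not the actual KOALA pipeline, which distills a diffusion model): a smaller “student” classifier, here one that sees fewer input features, is trained to match a larger “teacher’s” softened output distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    # Temperature T > 1 softens the distribution, a common distillation
    # trick that exposes more of the teacher's output structure.
    z = z / T
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy "teacher": a linear classifier over 10 features, 5 classes.
X = rng.normal(size=(300, 10))
W_teacher = rng.normal(size=(10, 5))
teacher_probs = softmax(X @ W_teacher, T=2.0)

# Smaller "student": sees only the first 6 features, mimicking the
# compression of a large model into a compact one.
X_small = X[:, :6]
W_student = np.zeros((6, 5))

def ce_to_teacher(W):
    # Cross-entropy between the teacher's soft targets and the student.
    p = softmax(X_small @ W, T=2.0)
    return -np.mean(np.sum(teacher_probs * np.log(p + 1e-12), axis=1))

initial_ce = ce_to_teacher(W_student)
for _ in range(400):
    student_probs = softmax(X_small @ W_student, T=2.0)
    grad = X_small.T @ (student_probs - teacher_probs) / len(X)
    W_student -= 0.2 * grad
final_ce = ce_to_teacher(W_student)

print(f"distillation loss before: {initial_ce:.3f}  after: {final_ce:.3f}")
```

The student cannot match the teacher perfectly (it has fewer parameters to work with), but the falling loss shows it absorbing the teacher’s behavior, which is how a 2.5-billion-parameter model can be shrunk toward a 700-million-parameter one.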

The simplest version of the new model, called KOALA, has 700 million parameters. This makes it a fairly “compact” neural network that works quickly and without the need for energy-intensive, expensive equipment.

A tool of this type can run on low-cost, commonly available GPUs and requires 8 GB of memory to handle user requests.

During testing, the KOALA neural network created images from a simple prompt (“a picture of an astronaut reading a book under the moon on Mars”) in about 1.6 seconds. According to the official description, OpenAI’s DALL·E 2 takes 12.3 seconds for a similar task, and DALL·E 3 takes 13.7 seconds.

The South Korean team presented their results in a paper (PDF) on the arXiv service. The project is currently available through Hugging Face, the open-source AI repository.