Second Edition of Stable Diffusion Released

The second version of Stable Diffusion, one of the most popular text-to-image AI models, has been released. The model can now produce images at much higher resolutions.

AI models such as DALL-E 2, Midjourney and Stable Diffusion have given us the extraordinary ability to create any image we want simply by describing it in writing. As with every emerging technology, these tools are also developing at a remarkable pace.

Open-source releases have contributed greatly to this progress, and today an exciting announcement arrived from the Stable Diffusion camp: the team behind the AI has released the second version of Stable Diffusion.

What does “Stable Diffusion 2.0” offer?

Stable Diffusion 2.0, the new version of the model released by Stability AI, takes text-to-image generation to a whole new level. Trained on a filtered subset of the LAION-5B dataset, the new model can create higher-resolution images, upscale low-resolution images to high resolution, and offer much more consistent variations of an image. Let’s look at each of these in detail.

Much higher resolution images will be available:

With Stable Diffusion 2.0, users will be able to generate images at 512×512 or 768×768 resolution. In addition, the training data was passed through LAION’s NSFW filter, so generated images are free of adult content.
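For instance, generating at the new 768×768 resolution might look like the sketch below. It assumes Hugging Face’s diffusers library and the “stabilityai/stable-diffusion-2” checkpoint; neither is specified in the announcement itself.

```python
# Illustrative sketch: 768x768 text-to-image with Stable Diffusion 2.0
# via Hugging Face's diffusers library (model ID is an assumption).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",  # assumed 768x768 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a lighthouse on a cliff at sunset, oil painting",
    height=768,
    width=768,
).images[0]
image.save("lighthouse_768.png")
```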

Of course, once you have an image you like, you will be able to increase its resolution much further:

The new version also includes an upscaler that can push images to 2048×2048 pixels and beyond. You will be able to take any image you like from your generations and bring it to resolutions far above 512×512 pixels.
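As an illustration, upscaling a finished 512×512 image by 4× could look roughly like this, again assuming the diffusers library and the “stabilityai/stable-diffusion-x4-upscaler” checkpoint rather than anything stated in the announcement:

```python
# Illustrative sketch: 4x upscaling a 512x512 output to 2048x2048
# with the upscaler released alongside Stable Diffusion 2.0.
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",  # assumed upscaler checkpoint
    torch_dtype=torch.float16,
).to("cuda")

low_res = Image.open("lighthouse_512.png").convert("RGB")  # 512x512 input
high_res = upscaler(
    prompt="a lighthouse on a cliff at sunset, oil painting",
    image=low_res,
).images[0]
high_res.save("lighthouse_2048.png")  # 4x per side: 512 -> 2048
```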

There are also improvements to creating images from other images:

With the new ‘depth2img’ model, version 2.0 can analyze the structure of an input image much better. This allows the AI to generate alternative versions of an image you provide while staying far more faithful to its layout and meaning.
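A rough sketch of using depth2img through diffusers might look like the following; the pipeline class and the “stabilityai/stable-diffusion-2-depth” checkpoint are assumptions for illustration:

```python
# Illustrative sketch: depth-conditioned image-to-image ('depth2img')
# with Stable Diffusion 2.0 via diffusers.
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",  # assumed depth2img checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("living_room.png").convert("RGB")
# The pipeline infers a depth map from the input image and uses it to keep
# the scene layout while the prompt changes style and content.
variant = pipe(
    prompt="a cozy cabin interior, warm lighting",
    image=init_image,
    strength=0.7,  # how far the result may deviate from the source image
).images[0]
variant.save("cabin_interior.png")
```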

Images can be modified with text!

In addition to image-to-image generation, users will now be able to describe in writing the changes they want made to an image they submit. The AI first analyzes the image and then applies the changes described in your text to it.
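In practice this kind of text-guided editing is typically done with an inpainting model; a minimal sketch with diffusers, assuming the “stabilityai/stable-diffusion-2-inpainting” checkpoint, could look like this:

```python
# Illustrative sketch: text-guided editing with the Stable Diffusion 2.0
# inpainting model via diffusers. The masked region is regenerated
# according to the prompt; model ID and file names are assumptions.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # assumed inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("park.png").convert("RGB")       # source image
mask = Image.open("bench_mask.png").convert("RGB")  # white = area to change

edited = pipe(
    prompt="a red wooden bench",
    image=image,
    mask_image=mask,
).images[0]
edited.save("park_red_bench.png")
```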

You can access Stable Diffusion 2.0 on the project’s GitHub page.
