

Image tool stopped
Google AI showed Black Nazis


For a long time, AI applications almost exclusively generated images of white people. Companies want to tackle the problem and bring more diversity into the results, but Google's software overshot the mark. The Internet company is drawing consequences, while at the same time making an important point clear.

Google is no longer allowing its Gemini AI software to generate images of people after it showed users non-white Nazi soldiers and American settlers. The Internet company admitted that in some cases the depictions did not correspond to the historical context. At the same time, Google fundamentally defended diversity in AI image generation: it is a good thing, the company argued, because people around the world use the software. In this specific case, however, it went too far.

In recent years, stereotypes and discrimination have repeatedly been a problem in various applications of artificial intelligence. Facial recognition software, for example, was initially poor at recognizing people with dark skin. And when AI generated images, it often depicted white people.

Developers at various companies are therefore striving for more diversity in different scenarios. Sometimes, as in this case, they get caught between two fronts: there is a loud movement in the USA in particular, which includes tech billionaire Elon Musk, that denounces alleged racism against white people. The accusation of reverse racism is often leveled in the USA by right-wing extremists against social movements such as “Black Lives Matter”. But even where prejudices against white people exist, researchers generally do not classify them as racism, citing, among other things, existing power structures.
