Tag: google

  • Google Updates AI Image Generation Tools Following Inaccuracy Incident

    AI-generated images have sparked numerous controversies, particularly in the United States, where Google had suspended the feature last February. A notable example is the Taylor Swift incident, in which explicit images of the singer, generated by AI without her consent, triggered a wave of outrage and prompted American lawmakers to propose privacy-protection laws. These controversies forced the company to reevaluate and tighten its rules before reintroducing the generation of images of people.


    Google announced that its Imagen 3 AI model is once again capable of generating images of people, but with restrictions designed to avoid the excesses that provoked negative reactions. Currently available only in the United States, the feature is accessible to users of Gemini Advanced, Business, and Enterprise.

    Generated results must now adhere to strict criteria: photorealistic representations of identifiable people and sensitive content, for example, cannot be created.

    Google Has Implemented Stricter Rules for AI Image Generation

    With Imagen 3, Google has reinforced its safeguards against the creation of controversial images, particularly to prevent problems such as non-consensual deepfakes and historically inaccurate representations. For example, requests for images involving historical figures are now more tightly controlled to avoid racial or cultural inaccuracies. Similarly, images of real people can no longer be generated photorealistically, and potentially shocking content is strictly prohibited.


  • Google Researchers Reveal How AI is Destroying the Internet by Spreading False Information

    A group of Google researchers has published a study revealing how AI is destroying the internet by spreading false information, which is almost ironic given the company’s role in the rise of the technology. The accessibility and the textual and visual quality of AI-generated content have opened the door to new forms of abuse and facilitated already widespread ones, further blurring the line between truth and falsehood.

    Since its public deployment in 2022, generative AI has offered a host of opportunities to accelerate progress in many fields. AI tools now offer capabilities ranging from complex audiovisual analysis and natural-language understanding to mathematical reasoning and the generation of realistic images. This versatility has allowed the technology to be integrated into critical sectors such as healthcare, public services, and scientific research.


    However, as the technology develops, the risks of misuse become increasingly concerning. Among these risks is the spread of disinformation, which now floods the internet. A recent analysis by Google showed that AI is currently the main source of image-based misinformation online. The phenomenon is exacerbated by the growing accessibility of these tools, which allow anyone to generate almost any content with minimal technical expertise.

    Figure: frequency of tactics by category. Each bar represents the frequency with which a tactic was identified in the dataset; each case of misuse could involve more than one tactic. Image: Nahema Marchal.

    However, while previous studies provide valuable information on the threats associated with AI misuse, they do not precisely identify the strategies used for this purpose. In other words, we do not know which tactics malicious users employ to spread false information. As the technology becomes more powerful, it is essential to understand how misuse manifests itself.

    The new study, conducted by a Google team, sheds light on the different strategies used to spread AI-generated false information on the internet. “Through this analysis, we highlight the main and new patterns of misuse, including potential motivations, strategies, and how attackers exploit and abuse system capabilities across modalities (e.g., images, text, audio, video),” the researchers explain in their paper, posted as a preprint on arXiv.

    Abusive Uses That Don’t Require Advanced Technical Expertise

    The researchers analyzed 200 media reports of AI misuse cases published between January 2023 and March 2024. From this analysis, they identified the key trends underlying these uses, including how and why users exploit the tools in an uncontrolled environment (i.e., in the real world). The analysis covered image, text, audio, and video content.


    The researchers found that the most frequent forms of abuse are fake images of people and the falsification of evidence. Most of this fake content is reportedly spread with the apparent intention of influencing public opinion, facilitating fraudulent activities (or scams), and generating profit. Moreover, the vast majority of cases (9 out of 10) require no advanced technical expertise and instead rely on the (easily exploitable) capabilities of the tools themselves.

    Interestingly, many of these abuses are not explicitly malicious yet remain potentially harmful. “The increased sophistication, availability, and accessibility of generative AI tools seem to introduce new forms of lower-level abuse that are neither overtly malicious nor explicitly violate these tools’ terms of use but still have concerning ethical ramifications,” the experts note.

    This finding suggests that although most AI tools have safety policies intended to ensure ethical use, users find ways to circumvent them with cleverly formulated prompts. According to the team, the result is an emerging form of communication that blurs the boundary between authentic and false information, mainly in the areas of political advocacy and self-promotion. This risks increasing public distrust of digital information while overloading users with verification tasks.

    Users can also bypass these safeguards in another way. If a prompt explicitly naming a celebrity is blocked, for example, a user can download a photo of that person from a search engine and then modify it at will by feeding it to an AI tool as a reference image.

    The researchers noted, however, that relying solely on media reports limits the scope of the study. News organizations generally cover only stories likely to interest their target audience, which can introduce bias into the analysis. Curiously, the paper also makes no mention of any misuse involving AI tools developed by Google…

    Nevertheless, the results offer a glimpse of the extent of the technology’s impact on the quality of digital information. According to the researchers, this underscores the need for a multi-stakeholder approach to mitigate the risks of malicious use of the technology.

  • Microsoft AI CEO Says AI Use of Web Content Doesn’t Require Permission

    OpenAI’s CTO recently suggested that some of the creative jobs AI may replace perhaps should not have existed in the first place. These comments sparked outrage, especially among professionals in the field. The situation is likely to worsen with new AI video generation tools such as Sora or Runway’s Gen-3, which draw on the opaque pool that is the web for their training. This topic has been at the heart of many debates since the emergence of generative AI, particularly concerning copyright. While rights holders fight for agreements, Microsoft AI’s CEO does not seem concerned: he asserts that artificial intelligence can train on freely accessible web content without requiring prior agreement from creators.


    In an interview with American journalist Andrew Ross Sorkin (CNBC) at the Aspen Ideas Festival, Mustafa Suleyman, CEO of Microsoft AI since March, gave his personal definition of intellectual property on the web. Responding to the journalist’s question about whether AI companies were stealing content from the web to train their large language models, Suleyman said: “I think when it comes to content that’s already on the open web, the social contract of that content since the ’90s has been fair use.” He added: “Everyone can copy it, recreate it, and reproduce it. That’s what we call ‘freeware,’ if you will, and that’s what we’ve understood.”

    Fair Use or Theft? OpenAI in the Crosshairs of the Courts

    Microsoft AI’s CEO thus believes that content published and freely accessible online belongs to everyone and can be used to train LLMs. However, this is far from the case. Fair use is a legal defense granted by a court that permits limited use of copyrighted works for purposes such as criticism, review, research, or news reporting. It also requires the court to assess whether the copying harms the copyright holder, and what AI models do goes beyond these conditions.


    In particular, given the vast amount of content these models ingest daily, it is difficult to determine the exact extent to which each individual work contributes to the algorithms in question.

    Suleyman does acknowledge one exception to the rule, which he calls “the gray area” and which will require evaluation by the courts: a distinct category of companies and press organizations that explicitly state they refuse to have their content indexed, notably by search engines. “We then find ourselves in a gray area on which the courts will have to rule,” says Suleyman.

    In any case, any content or creative work published on the web remains, in principle, protected by copyright, whether in France, the United States, or any other country.


    Moreover, it is precisely for violating this fundamental right that OpenAI and Microsoft now face several lawsuits, starting with that of The New York Times, filed in December 2023. Newspapers owned by Alden Global Capital followed with a series of lawsuits last May.

    In parallel, OpenAI has signed agreements with publishers and press groups such as Le Monde, Axel Springer, the Financial Times, and News Corp to use their content in exchange for remuneration. With these much-debated deals, is the company implicitly acknowledging that content freely accessible to all should, in fact, also be paid for?