OpenAI launches bug bounty program to combat system vulnerabilities

OpenAI has set up a bug bounty program to address privacy and cybersecurity concerns, rewarding security researchers who identify and report vulnerabilities in its systems.

Amid privacy and cybersecurity concerns, and recent bans in several countries, artificial intelligence (AI) company OpenAI has launched a program to address vulnerabilities in its systems.

On April 11, OpenAI, the company behind ChatGPT, announced the launch of the OpenAI “Bug Bounty Program” to help identify and address vulnerabilities in its systems. According to the announcement, the program rewards security researchers for their contributions to keeping OpenAI’s technology and company secure.

OpenAI invited the global community of security researchers, ethical hackers and technology enthusiasts to participate, offering incentives for qualifying vulnerability reports. The AI company believes that their expertise and vigilance will play a direct role in keeping its systems secure and protecting its users.

The program launch follows a statement by the Japanese government’s Chief Cabinet Secretary Hirokazu Matsuno on Monday that Japan would contemplate incorporating AI technology into government systems, provided privacy and cybersecurity issues are addressed.

OpenAI suffered a data breach on March 20, when a bug in an open-source library exposed some users' data to other users.

In the announcement, OpenAI said it has partnered with Bugcrowd — a bug bounty platform — to manage the submission and reward process, which is designed to ensure a streamlined experience for all participants. Cash rewards will be awarded based on the severity and impact of the reported issues. The rewards range from $200 for low-severity findings to $20,000 for exceptional discoveries.

Related: GPT-4 apps BabyAGI and AutoGPT could have disruptive implications for crypto

Safe harbor protection is provided for vulnerability research conducted in accordance with the specific guidelines OpenAI has listed, and researchers are expected to comply with all applicable laws.

Because OpenAI's systems are connected with third-party systems and services, a third party could take legal action against a participating security researcher. In that case, provided the researcher followed the program's rules, OpenAI will make it known that the researcher acted within the program's guidelines.

Magazine: All rise for the robot judge: AI and blockchain could transform the courtroom