AI_Safety_Security -- A list bringing together researchers, academics, and practitioners interested in the safety and security of Artificial Intelligence
About AI_Safety_Security
Artificial Intelligence (AI) presents opportunities to greatly improve medicine, product development, and even teaching itself, yet there are also concerns that it may be misused or deployed in unsafe and potentially catastrophic ways. Given the challenges of testing and releasing safe AI, ensuring that AI deployment follows state-of-the-art security procedures is critical. At the same time, AI itself could be used to find and exploit security vulnerabilities in code or to generate phishing attacks. We need both AI in security and security in AI. Given recent technical advances in AI and ongoing regulatory discussions, now is the time for the CSU community to build a forum where these issues can be discussed and, on occasion, to speak as a community about them. With the recent formation of the US AI Safety Institute Consortium (USAISIC) by NIST in response to an executive order last year, topics within USAISIC will provide a framework for our initial discussions. More information is available on the GitHub page for the AI Safety and Security forum: https://github.com/SteveKommrusch/CSU_AISafetySecurity.
Using AI_Safety_Security
To post a message to all the list members, send email to ai_safety_security@lists.colostate.edu. You can subscribe to the list, or change your existing subscription, in the sections below.
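If you prefer to post from a script rather than a mail client, a minimal sketch using Python's standard smtplib and email libraries is shown below. The SMTP host, port, sender address, and credentials are placeholders and assumptions, not list policy; substitute your institution's outgoing mail settings. Note that the list software may still hold or reject posts from addresses that are not subscribed.

    import smtplib
    from email.message import EmailMessage

    # Compose a plain-text message addressed to the list.
    msg = EmailMessage()
    msg["From"] = "your_name@colostate.edu"  # placeholder sender address
    msg["To"] = "ai_safety_security@lists.colostate.edu"
    msg["Subject"] = "Hello from a new subscriber"
    msg.set_content("A short introduction or question for the list.")

    # The SMTP host, port, and login below are assumptions; use your
    # institution's outgoing mail server and your own credentials.
    with smtplib.SMTP("smtp.colostate.edu", 587) as server:
        server.starttls()  # upgrade the connection to TLS
        server.login("your_name", "your_password")
        server.send_message(msg)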
Subscribing to AI_Safety_Security
Subscribe to AI_Safety_Security by filling out the following form. You will be sent email requesting confirmation, to prevent others from gratuitously subscribing you. This is a private list, which means that the list of members is not available to non-members.