I thought it would be interesting to have a thread in HLF about AI Safety, which is to do with ensuring that advances in AI are both beneficial and safe. I've been having some thoughts lately about how AI could be extremely helpful but also potentially risky, and I'd love to hear your thoughts on the topic.
What are your thoughts on AI, and AI Safety?
What kind of benefits could come out of advances in AI? What's the limit?
What kind of risks could come out of advances in AI? What's the limit?
What does it mean for AI to be safe? How can you guarantee AI safety?
What do you think the future of AI holds? What do you think will happen within your lifetime?
If you're interested in diving deeper, it's definitely worth looking into. MIRI, OpenAI and the Future of Life Institute have a lot to say about it, and there's a post on waitbutwhy about artificial superintelligence that will probably give you a good idea of what that's about.
I will be long dead before the robot uprising, but the current generation of humans won't have a good time. They wouldn't even be considered good slaves.
Please bear in mind that I'm definitely not a programmer here or involved with AI and robotics in any way, so what I'm about to say may be completely inaccurate.
I would personally expect that any AI with intelligence approaching (or maybe even surpassing) that of humans would have to adhere to certain core codelines (have you seen the film 'I, Robot'?) in order to ensure that they don't go all Terminator on our organic butts.
That said, the first of these AIs would have to be tested to see how they respond to humanity. Will they co-operate, or actively look for ways to overwrite the core codelines? Perhaps even hook them up to a simulation where many more of those AIs exist. If they go for an uprising, then just disable the one robot while it can still be contained and mark that field of science as off-limits to everyone.
This is also why I think it may not be a good idea to try to replicate human brain-like activity in AI. Logic would dictate that humans are demanding in comparison with such AIs, and that they interact quite destructively with their environments.
Something a lot safer (in my opinion) would be to have specialised AI. For example, have one which can simulate the effect of every single medicine on a certain patient, but is unable to tell you which way is north. Basically keep them contained to their respective fields, and don't even think about trying to hook all of those AIs together unless you're absolutely sure we won't have to start inventing time travel to fix that problem.