Forums

AI Risks and Safety?

Quick find code: 261-262-58-65901463

Blasty
Feb Member 2017
Posts: 9,319 Rune
Hey everyone :)

I thought it would be interesting to have a thread in HLF about AI Safety, which is about ensuring that advances in AI are both beneficial and safe. I've been thinking lately about how AI could be extremely helpful but also potentially risky, and I'd love to hear your thoughts on the topic.

What are your thoughts on AI, and AI Safety?

What kind of benefits could come out of advances in AI? What's the limit?

What kind of risks could come out of advances in AI? What's the limit?

What does it mean for AI to be safe? How can you guarantee AI safety?

What do you think the future of AI holds? What do you think will happen within your lifetime?

If you're interested in diving deeper, MIRI, OpenAI and the Future of Life Institute all have a lot to say about it, and there's a post on Wait But Why about artificial superintelligence that will probably give you a good idea of what it's about.

Looking forward to reading your responses :)
Blasty
// @BlastytheBlue // Blasty#5167
| Co-owner of Mine Nation

14-Apr-2017 17:13:05 - Last edited on 14-Apr-2017 17:20:19 by Blasty

Archaeox
Dec Member 2011
Posts: 53,399 Emerald
The problem with having safeguards is that you need to know what it is you're safeguarding against.

Should AI become a superior form of intelligence, this would become impossible.
~~~~ Just another victim of the ambient morality ~~~~

~~ Founder of the Caped Carousers quest cape clan ~~

!! Slava Ukraini - heroyam slava !!

16-Apr-2017 16:00:06

Blasty
Feb Member 2017
Posts: 9,319 Rune
Archaeox said:
The problem with having safeguards is that you need to know what it is you're safeguarding against.

Should AI become a superior form of intelligence, this would become impossible.


Haha, I think you've pretty much summed up this topic.

http://imgur.com/a/ZW1mg
Blasty
// @BlastytheBlue // Blasty#5167
| Co-owner of Mine Nation

16-Apr-2017 16:32:53

Blasty
Feb Member 2017
Posts: 9,319 Rune
Hi all!

I just saw this article published by RT about a humanoid robot that can do a number of things, including shooting a dual-wielded weapon from each hand.

https://www.rt.com/viral/384933-russian-robot-shoots-two-guns/

https://cdn.rt.com/files/2017.04/original/58f33e2bc4618860628b4584.jpg

I thought it might help with getting a sense of the potential risks associated with AI.
Blasty
// @BlastytheBlue // Blasty#5167
| Co-owner of Mine Nation

17-Apr-2017 04:44:38

HeroicSnorro
Posts: 2,081 Mithril
Please bear in mind that I'm definitely not a programmer or involved with AI and robotics in any way, so what I'm about to say may be completely inaccurate.

I would personally expect that any AI with intelligence approaching (or maybe even surpassing) that of humans would have certain core codelines to adhere to (have you seen the film 'I, Robot'?) in order to ensure that they don't go all Terminator on our organic butts.

That said, the first of these AIs would have to show how they respond to humanity. Will they co-operate, or actively look for ways to overwrite the core codelines? Perhaps we could even hook them up to a simulation where a lot more of those AIs exist. If they go for an uprising, just disable the one robot while it can still be contained and mark that field of science as off-limits to everyone.

This is also why I think it may not be a good idea to try to replicate human brain-like activity in AI. Humans are demanding creatures that interact quite destructively with their environments, and an AI modelled on us might inherit those tendencies.

Something a lot safer (in my opinion) would be specialised AI. For example, have one which can simulate the effect of every single medicine on a certain patient, but is unable to tell you which way north is. Basically, keep them contained to their respective fields, and don't even think about hooking all of those AIs together unless you're absolutely sure we won't have to invent time travel to fix the resulting problem.

-=HeroicSnorro=-

Sometimes, less is more.

19-Apr-2017 05:59:14
