If you’re scared of the implications of malevolent AI applications, you have good cause. If you are encouraged by the benefits of applications, you have good reason. See my latest experiment, below.
OMG This deep dive blew my mind. I’m beginning to think I’ve lived too long. Adaptation to a world that encompasses such applications of AI seems amazingly difficult but then so does the ability to hold 2 opposing beliefs at the same time and still function. Suddenly I’m happy to be 78 years old.
I don't know, Sue, about the "happy to be 78" thing. If I were 40, I'd be confident that AI supercharging all sorts of health benefit discovery might outweigh some of the negatives. Of course, all of this is TBD.
While I tend to be on the "this is scary, it must be regulated" side, it also refers back to another of your recent posts about tradeoffs in risk-taking. Do the potential benefits outweigh the potential risks? How should we define and measure these risks and benefits? Is there a way to mitigate the potential harms through regulation or market-based initiatives? At this point, I only have questions, not answers. That said, I agree with you that it's here to stay. At a minimum, we need mechanisms to monitor AI's ongoing trajectory and its effects.
Use white hat AI to catch black hat AI?
Depends on who's wearing the hats!