It scares me because there is no accountability for the reasons behind actions taken by AI programs.
I remember reading about a neural net program that was going to judge loan applications. The technologists were discussing the "giving a reason" issue. Their proposed solution was: if the program rejected a loan, they would automatically increase the applicant's income until the program accepted it. Then they could say, "if your income had been $X you would have gotten the loan, therefore your income is the reason for rejection." The problem is that the neural net could, in fact, be illegally discriminating on some other basis. If it were a human being, there might be memos that could be used to show that the bank was discriminating illegally, but a neural net is a black box, with no way to prove that income wasn't the real reason for rejecting the loan.
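The search described there amounts to a simple counterfactual probe. A minimal sketch, where the model, features, and thresholds are all invented for illustration (and the hidden `zip_code` input stands in for the potentially discriminatory factor the explanation never surfaces):

```python
def loan_model(income, debt, zip_code):
    """Toy black-box model. Note it secretly penalizes one zip code --
    a stand-in for illegal discrimination the 'income' explanation hides."""
    threshold = 3 * debt
    if zip_code == "99999":          # hidden, possibly illegal factor
        threshold *= 2
    return income > threshold

def counterfactual_income(income, debt, zip_code, step=1000, limit=1_000_000):
    """Raise income in fixed steps until the model approves the loan.
    Returns the first approving income, or None if the limit is hit."""
    trial = income
    while not loan_model(trial, debt, zip_code) and trial < limit:
        trial += step
    return trial if loan_model(trial, debt, zip_code) else None

# "If your income had been $61,000 you would have gotten the loan" --
# true, but it says nothing about whether zip code drove the decision.
print(counterfactual_income(40_000, 20_000, "12345"))   # 61000
print(counterfactual_income(40_000, 20_000, "99999"))   # 121000
```

Both applicants get an income-based "reason," yet the second one needed double the income purely because of the hidden zip-code penalty, which the explanation never mentions.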
AI can be a good thing in a confined environment, but it's out in the world now, and the problem is that we still don't know the path that leads to consciousness.
As far as I know, AI is currently very good at detecting patterns, and this kind of ability is very useful in the diagnostic field.