Remember Tay? Microsoft's Twitter bot that went off the rails a few years ago? People were encouraged to interact with it and, within 24 hours, the model turned misogynistic and racist because of the data it was ingesting from Twitter conversations.
It became a big joke for some, but it marked an inflection point for the market - especially for organizations that may have been moving too fast.
They realized they needed more robust AI with built-in safeguards, so that it could not be influenced so easily. They also needed to take the time to establish AI rules and protections governing which actions count as 'appropriate' in a given situation.
Most of us don't think about algorithms until they make mistakes - but organizations need to proactively prevent discrimination by policing themselves and making decisions based on what is best for the customer.
It falls to the cybersecurity specialist to run diagnostics on any changes to the data in order to catch breaches that would otherwise go undetected.
Fairness, transparency, empathy, and robustness must be the four main pillars of any company's responsible AI policy. While organizations cannot afford to slow down, they need to unite around a set of fundamental principles that respect the customer and support a sustainable (and likely profitable) vision for long-term success. Everyone benefits from this.
Beyond being the right thing to do, it will also protect and strengthen relationships with customers, brands, and the bottom line, whatever the next crisis turns out to be.