TLDR: We should call it AI governance instead of AI regulation, because that opens up a wide range of policies, guidelines, and frameworks for developing, commercializing, and using autonomous software before involving Congress.
A.I. regulation has resurfaced as a hot topic, but to me, regulation is a tricky word.
When one thinks about the word regulation, one usually defaults to juridical regulation (i.e., law). However, Lawrence Lessig's (1999) "pathetic dot theory" reminds us that, although law is the most evident form of regulation, there are four constraints (which Lessig calls regulators) that apply to any given conduct all at once. Those regulators are:
the law,
social norms,
the market and,
architecture (which, in the case of conduct within digital information systems, is source code).
According to Lessig's theory, those four regulators interact with one another. In other words, all of them work together to encourage or discourage a specific conduct by the regulated subject, which is represented by the pathetic dot. Check out this graphic to fully understand what I'm talking about.
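To make the "architecture" regulator concrete, here is a minimal sketch of my own (the function and the age threshold are hypothetical, not from Lessig): while a law merely forbids a conduct after the fact, source code can make the conduct impossible in the first place.

```python
def can_register(age: int) -> bool:
    """Architecture as regulator: the constraint is enforced by the
    code itself, before any law, norm, or market force comes into play."""
    return age >= 13  # hypothetical platform rule, baked into the system


# The under-age sign-up is not punished; it simply cannot happen.
print(can_register(12))  # False
print(can_register(30))  # True
```

The design point is that the constraint lives in the system's structure: the regulated subject never gets the chance to break the rule, which is exactly why Lessig treats code as a regulator on par with law.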
While I like the theory proposed by Lessig, I find it a bit confusing when it uses the words regulation and regulators. Instead, I like using the words governance and governing tools. Some may say this is a po-ta-to / po-tah-to kind of thing, but I think that swapping these two concepts can help (especially my lawyer colleagues) to grasp the depth of this idea. I mean, it's not about the law, but rather about the creation of solid frameworks to govern the creation and deployment of important technology.
Applying this theory (with these minor adjustments) to autonomous software, one might notice that we have different tools to achieve "A.I. governance". With this new mindset, our worldview opens up. We don't have to rely on Congress to start building the governance of our systems. Think of it this way. Instead of trying to enact some rigid set of rules that will (probably) take a long time to be amended if they turn out to be incomplete, we can try different avenues to understand what we need first. For example, universities may develop frameworks based on research, business organizations may use those frameworks to develop guidelines, corporations may use those guidelines to build code for autonomous software, and, if convenient, executive agencies may develop their own recommendations. With time (and enough trial and error), Congress may use all these tools and all our acquired knowledge to enact comprehensive ad hoc legislation.
My point is, we should approach "A.I. regulation" in Lessigian terms. Autonomous software is not governed (only) by the law, but also by the market, social norms, and source code. To fully capture this concept, I believe we should call this subject "autonomous software's governance" instead.
Yes, I know it doesn’t sound as trendy, but I am aiming for actual knowledge, not social media likes.
Anyway, that’s it for now. Conde out!