Personal Comments
I loved this book and thought it was one of the most insightful things I have read in recent memory. Its lessons about technology and its effect on the public are knowledge all of us should try to understand. For most of the book I felt as though the authors were speaking directly to me, and it constantly sent me looking for more answers. The constant questions kept me on my toes and wanting more.
Interesting Information
There were major sections of this book that stood out to me, including those on the push toward efficiency and optimization, artificial intelligence bias, privacy, and much more.
A notice here: if you have not read the book and plan to, I would stop reading now, as what I describe below comes directly from the book itself.
I would highly recommend reading the introduction of this book, as it provides great background on why this topic is so important. Our technological society has been so focused on becoming more "optimized" and efficient that we forget about the consequences of our quick actions. We can see this with companies like Facebook (Meta) and YouTube, whose algorithms work against the public interest while pursuing their own goals. By "against the public" I mean that these companies do not think through the implications of their innovations. YouTube in particular is so focused on increasing watch time that it neglects other contributing factors like misinformation and inappropriate content. This is the result of narrow-minded tunnel vision toward a single goal. While reading the book, pay close attention to the company and product Soylent; the authors chose it as their main illustration of these issues.
Secondly, the book describes the influence these major tech companies have over politics, and specifically their effect on democracy. This portion of the book really made me think about what laws and regulations have been shaped by these large corporations. Only in recent years have we begun to take notice of what is happening here. With the CEOs of the four major companies (Google, Apple, Amazon, and Facebook) being called to testify together in recent years, their political influence is clearly large. This portion of the book takes a long, insightful look at the power these corporations hold in the new technology age. A question I am still left with concerns democracy in this tech age and whether it remains the "best of the worst" forms of government for our time. I encourage you to answer this question for yourself.
The next two sections spoke to me the most and cover subjects I have thought about before. The first relates to bias around race, gender, and religion in our daily lives. Of course, this is tied to computer algorithms and, at this stage more importantly, artificial intelligence. This section made me ask myself some good questions and raised the problem of navigating around personal bias. One would think you could escape personal bias by developing a program to organize and sort information for you. However, artificial intelligence in these situations does not work like flipping a coin. The complex interactions and decisions made by these systems depend on those who code them, and the systems function within the society we live in. Therefore, there is no doubt that racism and sexism appear in these algorithms. The questions I was left with are:
- Is navigating around certain biases (racism, sexism, etc.) a form of bias/segregation itself?
- Can we eliminate bias in algorithms and artificial intelligence? If so, how?
- If we make an algorithm that is as "accurate" as possible and then act on its decision, have we avoided bias?
This whole section reminds me of the book Freakonomics, which in one of its chapters evaluates systemic racism in job applications and schools. One of the intended uses of these algorithms is to mitigate this bias, but we have been unsuccessful. My last remark is that AI is just an extension of ourselves; it comes with our perceptions and knowledge. At this time, I do not believe we can develop an unbiased artificial intelligence, because we do not fully understand our own biases.
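The point that an algorithm inherits the bias of the data it learns from can be made concrete with a toy sketch. This example is not from the book, and all the data and names in it are hypothetical: a "neutral" model trained on biased historical hiring decisions reproduces the disparity even though its code contains no explicit rule about group membership.

```python
from collections import defaultdict

# Hypothetical historical records: (group, qualified, hired).
# Group B's qualified candidates were hired less often in the past.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

# "Train" by memorizing the hire rate for qualified candidates per group.
stats = defaultdict(lambda: [0, 0])  # group -> [hired, total] among qualified
for group, qualified, hired in history:
    if qualified:
        stats[group][1] += 1
        stats[group][0] += int(hired)

def predicted_hire_rate(group):
    hired, total = stats[group]
    return hired / total

print(predicted_hire_rate("A"))  # 1.0  -> qualified A candidates always hired
print(predicted_hire_rate("B"))  # ~0.33 -> same qualifications, lower rate
```

The model never looks at race or gender directly; the disparity rides in entirely on the historical labels, which is the mechanism the section describes.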
Privacy also becomes a great matter of difficulty and discussion when evaluating technology in the 21st century. Even in the mid-1990s, the government was very interested in listening in on the public and using surveillance as protection. This issue has only grown since then and has become a major political issue. It ties back to much of what the book has already discussed about the power of these large tech companies. Again, my questions about this section involve the people: what can we do to feel protected, and how does surveillance affect our actions? My main concern is whether privacy and trust have a proportional relationship. By this I mean: does more privacy create more trust in the government, and vice versa?
The last major section considers automation and freedom of speech. The major questions concern how we should shape the world of technology through the optimization of automation.
One of the points that stuck out to me most is how the United States, along with China and Russia, refuses to sign a ban on autonomous weapons. To me this shows the severity of war and how it plays into the structure of technological power.
Another thing to consider is the constraint that tech places on freedom of speech and what safeguards we should put around it. Has technology limited or expanded our freedom of speech? Facebook and other social media platforms act as a monarchy, controlling our speech and what can be said online. This is not necessarily a bad thing, as much of the information on the internet can be far more harmful than what came before. This leaves me with the following questions regarding technology, social media, and free speech.
- What content is deemed “wrong” and should be removed from social media/internet? More importantly, who should determine what is to be removed?
- What could be said in the past that would no longer be acceptable today? Many famous people are running into trouble as things they said years ago catch up with them.
- Could we use automation and AI to gatekeep conversations and moderate online speech?
- How would we eliminate bias in these algorithms for moderation?
- How can we decide which sources to trust?
- Should the laws that come along with free speech be updated for the platforms we communicate with? This would be similar to democracy being updated for a technological future.
My notes while reading through this consider these questions but also add in some informative comments about the subjects.
- Considering online disinformation, I believe that it is rampant because of the free speech protections that we give social media.
- Humans have gone from active information seekers (reading both sides of a story in the newspaper, contacting people who experienced the events, etc.) to passive information seekers who let the information come to them. Because of this, we can easily dig ourselves into holes of the same information.
- Quote: "Suppression of expression is nothing less than the asphyxiation of the individual and the suffocation of society."
- Lies travel quicker than the truth and the internet does not challenge our existing views.
- I found this interesting: political polarization has increased most among those who use social media the least. Social media is therefore not the main driver of polarization and can even provide diverse views of information.
- Older people are more likely to pass along fake news stories to their friends.
- Another interesting point of contention is what is called motivated reasoning. Showing someone evidence that contradicts their beliefs can make them more likely to double down on what they already thought. I think we are seeing this a lot right now.
Can AI Moderate Content?
- This was an interesting statistic about Facebook and its automated online moderation: Facebook is able to detect pornography, suicidal thoughts, and bullying up to 99%, 92%, and 50% of the time, respectively. Bullying is difficult to catch because it is more personal and context-dependent.
- Humans who moderate content alongside these bots experience significant levels of PTSD.
- Automated moderation can remove things that were not actually against the guidelines. This raises the question: which is worse, removing most of the bad posts with some errors, or removing none at all?
- An interesting point for the social media companies is that, because they are private, they are allowed to remove and moderate speech without violating the First Amendment. Under these laws, Facebook has effectively created its own form of government for running its site. This could be used to test new theories about how to govern technology and work toward a better future.
- Lastly, I feel that online speech should be regulated like government speech; however, the rules should be adapted to the online environment and, just like politics, updated for the growing tech age.
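The "some errors vs. none at all" trade-off in the notes above comes down to base-rate arithmetic: because harmless posts vastly outnumber harmful ones, even a very accurate filter removes a surprising number of legitimate posts at platform scale. The sketch below uses entirely hypothetical numbers (they are not from the book) to show the shape of the problem.

```python
# All figures are assumptions for illustration only.
posts_per_day = 1_000_000_000     # assumed daily post volume
bad_fraction = 0.001              # assume 0.1% of posts violate the rules
detection_rate = 0.99             # bad posts correctly caught (true positives)
false_positive_rate = 0.001       # harmless posts wrongly removed

bad_posts = posts_per_day * bad_fraction
good_posts = posts_per_day - bad_posts

caught = bad_posts * detection_rate                 # bad posts removed
wrongly_removed = good_posts * false_positive_rate  # good posts removed

print(f"bad posts removed:  {caught:,.0f}")
print(f"good posts removed: {wrongly_removed:,.0f}")
```

With these assumed rates, the wrongful removals (999,000) slightly outnumber the correct ones (990,000): a 99% detection rate still means roughly one legitimate post removed for every bad post caught, which is why the moderation question above has no easy answer.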
Final Notes
The last chapter has the best title to encompass how this topic should end: Can Democracy Rise to the Challenge?
We must protect ourselves from the tech innovation that is growing in this world. We need to take time to educate ourselves on what these big companies are doing and how they are influencing our lives. Data collection and use is no joke, and the growing length of terms-of-service agreements (do you read these?) only makes that education more difficult. Just taking the time to read the book is a great first step toward understanding what you can do to change the course of this history and protect yourself. (Try to learn about the GDPR and CCPA.)
When considering your online protection, think of it in terms of driving a car. While driving, you not only need to rely on yourself to avoid an accident; you also need the people around you to follow the rules of the road. This kind of reliance is what we need in the tech world. We should be able to rely on rules set by those who govern the internet and social media, rather than only on ourselves.
Remember, these issues are systemic, and there are large problems that need to be resolved. We must put in real effort to make sure democracy can rise to the challenge of this growing environment. This is everyone's responsibility, the same as the global climate crisis. Working together, we can change the course of this technological future and make these great innovations safe for everyone.
Call to action - Computer programmers and engineers need to be held to the same ethical standards as doctors and other professionals. A lot of innovations are coming out of this corner of the world, and they are genuinely changing our environment. Computer scientists do not really have a code of ethics that holds them accountable for what they create; though certain codes exist, they carry little weight without accreditation. The main point here is integrity: everyone should be held to the same standards.