For the assignment, I watched Joy Buolamwini's TEDTalk and read this article on Silicon Valley. Some quotes that stood out for me included these:

1. "Algorithms, like viruses, can spread bias on a massive scale at a rapid pace" (Joy Buolamwini)

Right from the beginning of her TEDTalk, this quote caught my eye. I know a lot of people have well-founded issues with algorithms, but it's the first time I've seen them referred to as a virus, and I found it disturbingly appropriate.

2. "You can hire employees all day, and if you don't have accountability in your workplace they're all gonna leave." (Y-Vonne Hutchinson via Will Oremus)

I liked this quote from the article a lot because it calls out big companies on their ineffective, short-sighted, short-term solutions. Their current approaches are shoddy. It's like putting your food in a broken refrigerator, watching it spoil, and then deciding the best solution is to buy new food instead of fixing the fridge. This solves nothing, and you're priming the next batch of people to be abused in the same way those before them were.

3. "[Proctorio has] claimed to have heard of 'fewer than five' instances where there were issues with face recognition due to race" (Akash Satheesan via Todd Feathers)

Frankly, I found this quote hilarious in its audacity. If you wish to fake a statistic on a well-known issue, at least do so believably, because this is almost insulting. The fact that some companies can be so willfully ignorant or flat-out dishonest when it comes to legitimate issues is both concerning and anger-inducing.

Some Newfound Concerns...

Some concerns that stood out to me related to the severity of this problem in both programs and people. I vaguely remember hearing about the abuse faced by employees who are PoC in the tech industry, but I never imagined it was this widespread or this bad. That people can still face full-on racism within these well-known companies and have to leave because of it, without those at fault facing any consequences, infuriates me to no end - and I suppose my lack of awareness about it highlights just how widespread this ignorance may be. The same goes for the algorithms that run facial recognition programs; I've never had to use one, which might account for why I was never aware of the problem, but the fact that such faulty systems are being used by the police and the government is an issue I have trouble wrapping my head around. They're aware of its flawed nature and the detrimental mistakes it can make. Why knowingly use it? And in the case of Proctorio, why lie so blatantly about it?

In-Class Connections

I think what's discussed in this video and article can be linked back to various discussions we had regarding identity. To put it in the words of Satheesan, things like these racist algorithms serve to dehumanize individuals. In its lack of recognition, the algorithm withholds a piece of their identity. And in the bigger issue of how some employees are treated in the tech industry, this is highlighted even more; any time a person faces prejudice or discrimination on the basis of how they look or where they're from, it's a threat to individual identity. More to the point, the pieces I watched and read directly relate to our recent discussion in class regarding facial recognition programs and their inability to identify people of color. I'd even say it could relate to the video we watched in class about the girl who was denied her anxiety medicine because the monitoring on her phone made a false assumption.
It's all terribly reminiscent of the limitations and flaws in technology, and how much more problematic they can become the more they're integrated into our daily lives.

Future Implications and Plans From Here

I don't have any direct connection to the kinds of technology presented in the video and article (at present), but with technology on the rise, I don't want to assume none of these issues will affect me or those around me. It isn't just a matter of faulty facial recognition; algorithms go way beyond that, and they can and will be used for purposes that will most definitely impact our everyday lives. They could affect any of our interactions online, the people we chat with, or even the social media we favor. The people around me aren't likely to remain unaffected either. For example, I have a relative who is very involved in AI and coding; he might actually end up dealing directly with issues like bias in algorithms. It could become a problem he must actively solve, rather than the abstract concept it is for me, something I can only comment on from the outside.
Now that I'm aware of this, I see it as my responsibility to speak up about it. I haven't been in any classes that use proctoring software, but I know people who have, so I could potentially spread the word about how faulty it is. I may not be in a position to do anything immediately, but I'm never at a point where I can't use my voice. If action is beyond me, then raising awareness isn't. I'll also try to be more aware of my own biases and keep them from influencing my own work, seeing as biases are so easily passed from people to programs in the technological field. In general, having learned how serious this issue is makes me more aware that being complacent isn't going to cut it. It's a problem that has to be approached actively.