*subtitle shamelessly stolen from Jeoh*

Howdy folks! We're going to give this thread a try. If it doesn't work, we'll get rid of it!

There's been a few times where the big IT threads have veered off into the topic of ethics in IT. Usually, those conversations are more off-topic than not, and not everyone wants to read a "political" discussion in every single thread on SA. So when they don't peter out naturally, us mods try to steer the conversation back on topic. But it's a pretty interesting topic! And it comes up often enough that I think people want a place to discuss it. So here we are!

What are the ethics of working for a certain employer, industry, or government? Or how about the ethics of working in IT in general? What are some of the ethical issues around infosec, or even around having the "keys to the castle" for a small company? What obligations, if any, do IT workers have in society at large, considering their outsized power and privilege? Where are IT workers getting taken advantage of, and what can we do about it?

Now, for some ground rules:
Now, with all that being said, I'm looking forward to the discussion. I really want to hear what SH/SC has to say about this topic, and I think this could be a great thread. Who wants to start us off? What's been on your mind lately?
---
There is no ethical computing under capitalism. (USER WAS PUT ON PROBATION FOR THIS POST)
---
I try and follow the Bill and Ted laws of being excellent to one another, but there's only so far you can go. I wouldn't work directly for an oil company, for example, and would try to avoid working on a project for an oil company, but if you're going to rule out doing network consulting for a marketing company because they have Shell as a client, then you're going to find that there's not really anything you can do - and we all need to earn money. In general, whether it's intentional or not, a lot of the emphasis on individual responsibility to avoid climate change by changing habits seems designed to deliberately distract from the impact of maybe 30 companies. Not to say individual choices can't have an impact, but they're not the biggest problem.
---
I worked for a small company (~500 employees, IT team comprising 10 actual workers and about 5 do-nothing directors) that was bought by a massive big pharma company (40,000 employees, god knows how many 'contract' workers). What shocked me was receiving an email from the CEO telling us to tell our congresspeople to vote against a bill that was meant to make drugs more affordable for people. "It will stifle innovation." I'd never seen the quiet part said out loud like that before. I will probably never work at such a company again, but holy poo poo was it amazing having the budget to buy whatever the gently caress I needed.

I think the shitpost above is right, though: nothing is totally ethical under our system. Even local socialist organizing seems to rely on big tech. It's just varying degrees of awfulness, with oil and pharma near the top.
---
droll posted:I worked for a small company (~500 employees, IT team comprising 10 actual workers and about 5 do-nothing directors) that was bought by a massive big pharma company (40,000 employees, god knows how many 'contract' workers). What shocked me was receiving an email from the CEO telling us to tell our congresspeople to vote against a bill that was meant to make drugs more affordable for people.

Depending on the state, that is technically illegal. I had a client do this as well, and they caught fines. It usually falls under lobbying, and when your employer does it, even if they don't say so, it implies coercion.
---
I don't know if this is within the intended scope of this thread, but I am curious about detailed discussion of ethics in artificial intelligence. This isn't my professional field, but I find the topic interesting. I have a passing awareness of the issues that have been more in the public eye over recent years, especially around racism, sexism, and general bias that emerges in AI, but I imagine that's just the tip of the iceberg. While bias is an important topic in its own right, there has to be a much more robust and wide-ranging discussion happening somewhere about ethics in AI that goes beyond it.
---
It's about ethics in granting yourself access to the HR mailbox. Ask my previous co-worker.
---
Cithen posted:I don't know if this is within the intended scope of this thread, but I am curious about detailed discussion of ethics in artificial intelligence. This isn't my professional field, but I find the topic interesting.

It's not totally focused on the AI topic, but this is a very accessible book I read on the issue of designing in bias: https://www.ruinedby.design/ Watch a couple of Mike's recorded presentations, and if you enjoy them, the books are worth a read.
---
Cithen posted:I don't know if this is within the intended scope of this thread, but I am curious about detailed discussion of ethics in artificial intelligence. This isn't my professional field, but I find the topic interesting.

Would be happy to see this discussed here!
---
I can say that for as bad a rap as local government gets in general, the IT shops are generally (generally) extremely ethical. As non-competitive entities, lots of municipalities share services or at least strategies around solving certain problems, and, as a rule, most take their role of being stewards of public dollars very seriously. I'm an IT Director for a municipality and have been in the field for 14+ years, starting at helpdesk and moving up the ranks to executive leadership. Integrity and ethics are two ideals I live by and expect from my teams.
---
Worth noting that "AI" is both a very broad term for a bunch of technologies and a buzzword. The big issue is that we don't actually have AI, just trained relational stacks. A lot of the biases come from what the AI is trained on.
---
Don't facial recognition and other "anti-theft/anti-petty-crime" systems also tend to be much more heavily deployed in lower socioeconomic or minority areas? That ends up adding fines and jail time, and the ability to semipermanently lock someone out of the only few places they can shop. I know in the mid-6-digit median income areas here, nobody wants facial recognition and there aren't even obvious cameras at self checkouts ("because there's no theft"). But go to a Walmart an hour away and there are mobile police towers with 360-degree PTZ IP cameras that automatically zoom in on you when you look at them, automatic license plate scanning on entry and exit, and each self checkout recording you from multiple angles. Steal food once because you need to, and you're almost certainly guaranteed to get caught with HD footage, while in areas without all this automation and without robots roaming the parking lots, it'd just be written off as shrinkage.
---
I work for a private company that operates a number of prisons (amongst a bajillion other things - we are a giant global company, but I just work in the prison bit). I get the impression private prisons are a plague upon the US, but I do not work there, and I sleep fairly easily witnessing the way my company operates from an ethical perspective, so that is good. What I do have to highlight is the very strict rules about how we can and cannot make money. For example, if the prisoners spend money on commissary, we have to use the profit from that money to invest in making the jail a better place for the prisoners' benefit. There is also a government dude on site who signs off on such projects to keep it all transparent.

From an IT perspective, one really bad thing about my particular company is that we had a head of infosec about 10 years ago who created such a fear of risk across the company that we are behind our competitors to this day. For example, you were never allowed wifi or cell phones in prisons, for fairly obvious reasons. Since the pandemic this has relaxed - the government gave us data-enabled iPads to allow prisoners to virtually attend funerals via Zoom. In fairness, the haste with which that was deployed is probably the opposite end of the security spectrum, as it was a response to covid.

The issue our company has is that there is now guidance about deploying wifi securely that we could follow, but we are still trying to find a way to issue people with PDAs that dock at the end of shift to upload their work instead of bothering with wifi. This is just because our attitude to any risk is to not do something, rather than mitigate appropriately.
---
I worked for Time Warner Cable for a few years. They pulled a guy from high up in IT to code the new database (I don't know the particulars), but they never gave him the title or pay raise. After 6 months he asked about it, and they pretty much said "Once the database is set up you can go back to being in IT," as if that was good news. He quit a couple weeks later, just before Thanksgiving. Turns out he wrote a little killswitch into the system, and it locked itself on Thanksgiving day, requiring managers to come in. They said the code was bad and they were able to get around it easily, but he put pictures of the text messages up on Facebook: first them raging at him and telling him he's going to jail, and then, later, texts of the manager begging for the key.

Please remember that Time Warner Cable (now Spectrum) is a terrible company that loves wage theft.
---
I think legally he would have been hosed if they'd wanted to pursue it, though IANAL. The lesson is that being asked to do something above your normal duties is the start of a pay negotiation.
---
Thanks Ants posted:I think legally he would have been hosed if they'd wanted to pursue it, though IANAL.

This is the correct answer. Unfortunately, while I feel for him, setting up kill switches and stuff like that is just an easy way to go to jail or have fines levied against you, and the company WILL win. I've had to do a few cases like that for clients, and while some of the people involved were major dicks, most of them were being taken advantage of, and it really did bother me helping build cases and evidence against them.
---
yeah, it's much easier to just write poo poo code and be bad at code. Then either it breaks and you fix it and you're the hero, or you get fired and it breaks. Totally legal version!

Certainly not upset about cleaning up after a "30 years of experience" guy who learnt how to program in C, didn't want to understand integer overflow or underflow, and instead just used strings for everything, then did parseInt() or parseLong() as needed.
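For anyone who hasn't hit it, the wraparound being dodged there looks like this - a rough Python sketch (Python's own ints are arbitrary-precision, so the 32-bit behavior is emulated by hand; the helper function is made up for illustration):

```python
def to_int32(n):
    """Interpret n as a two's-complement 32-bit signed integer."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

counter = 2**31 - 1           # INT32_MAX, 2147483647
print(to_int32(counter))      # 2147483647
print(to_int32(counter + 1))  # -2147483648: silent wraparound, no error
```

The "strings for everything" workaround just trades that silent wraparound for parse errors and slow arithmetic, which is arguably worse on both counts.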
---
I'm not saying the guy was smart or wouldn't go to jail. It was just funny to watch the manager who, days before, was texting begging-hands emojis now boasting that she is so much smarter than this dumb coder.

Biowarfare posted:yeah, it's much easier to just write poo poo code and be bad at code

I'm already halfway there
---
Cyber Punk 90210 posted:Please remember that Time Warner Cable (now Spectrum) is a terrible company that loves wage theft.

(Now Lumen lol)

They have entered the "change our name every 6 months" cycle, and some friends of mine who are still with them like to gripe about the constantly changing signatures that they have to use. I just don't use a signature at all, and assume people can read my email address.
---
Now that CEH is toast, can we become Certified Un-ethical Hackers?
---
How the gently caress can ML-based programs treat the data they were trained on as not part of the program? Specifically, I am thinking of GitHub Copilot. It's an ML tool that generates code by guessing the next line a developer wants to write. Thing is, it has been trained on billions of lines of open source code, much of which is under licenses like the GPL, which only permit reproduction under certain conditions. It reproduces open source code verbatim regardless of the license of the code it "learnt" from. https://mobile.twitter.com/mitsuhiko/status/1410886329924194309
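A toy model makes the memorization point concrete - a hedged Python sketch (the word-level Markov chain is an illustrative stand-in only, not how Copilot actually works, and the "training" text is invented):

```python
import random
from collections import defaultdict

# Toy stand-in for a code-completion model: a word-level Markov chain
# "trained" on a single snippet. All it can ever do is replay what it saw.
training_text = ("this snippet is GPL licensed and may only be "
                 "redistributed under the same license")

model = defaultdict(list)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    model[current].append(nxt)

def complete(prompt, max_tokens=20):
    """Generate a 'completion' by walking the chain from a prompt."""
    out = [prompt]
    for _ in range(max_tokens):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# Every token here has exactly one observed successor, so the model
# emits its training text verbatim -- memorization, not invention.
print(complete("this"))
```

Scale the idea up, and the same thing happens whenever a training example is rare or distinctive enough that the model has effectively memorized it - which is the verbatim-GPL-code case in the tweet above.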
---
SMEGMA_MAIL posted:Now that CEH is toast, can we become Certified Un-ethical Hackers?

Wait, did something happen to CEH?
---
Cithen posted:I don't know if this is within the intended scope of this thread, but I am curious about detailed discussion of ethics in artificial intelligence. This isn't my professional field, but I find the topic interesting.

As someone who has coded neural networks and other types of machine learning algorithms from scratch (using feedforward, backprop, etc.), I can tell you that they have no inherent bias when identifying features in data. In a neural network, the weights start out the same way regardless of the task (typically small random values) and are then updated, with the error propagated back through the network, for every training data point. Granted, machine learning is only as good as the data. If you feed the neural network garbage data, then you'll get garbage data out. This includes the omission of valid data.

Trust me, the bias coming out of "AI" systems is much more likely to reflect the human biases of researchers who try to make the algorithm output their own biased expected results, rather than a gradient descent machine learning algorithm getting the answer flat out wrong. Machines are not biased, but the people (and thus the data fed to the machine) can be and often are. The most important way of combating this bias is to ensure that the data being fed into the machine is factual, correct, and complete.
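For anyone curious what "from scratch" looks like, here is a minimal sketch of feedforward plus backprop for a single sigmoid unit (the OR-gate data, learning rate, and single-neuron setup are all illustrative choices, not anything from the post above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: learn y = x1 OR x2.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

# Weights start as small random values -- nothing task-specific baked in.
w = rng.normal(scale=0.1, size=2)
b = 0.0
lr = 1.0  # learning rate, an arbitrary choice for this toy problem

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Feedforward: predictions for the whole batch.
    pred = sigmoid(X @ w + b)
    # Backprop: chain rule for one sigmoid unit under squared error.
    grad_z = (pred - y) * pred * (1.0 - pred)
    w -= lr * (X.T @ grad_z) / len(X)
    b -= lr * grad_z.mean()

print(np.round(sigmoid(X @ w + b), 2))  # heads toward [0, 1, 1, 1]
```

Which shows the post's point as far as it goes: before training, `w` encodes nothing about the task; everything the model ends up "believing" comes from `X` and `y` - and, as the replies below argue, from the error function you choose to minimize.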
---
Jowj posted:Wait, did something happen to CEH?

Yup, EC-Council plagiarized a bunch of blog articles on security. This is not their first issue, either; they had some major misogyny issues a year or so ago.
---
CommieGIR posted:Yup, EC-Council plagiarized a bunch of blog articles on security.

That seems rather unethical.
---
CommieGIR posted:Yup, EC-Council plagiarized a bunch of blog articles on security.

cueh,
---
The Gadfly posted:As someone who has coded neural networks and other types of machine learning algorithms from scratch (using feedforward, backprop, etc.), I can tell you that they have no inherent bias when identifying features in data.

It's more than data - good data alone doesn't mean your ML algorithm won't produce a model that is biased, even if we could define or get perfectly good data from the real world in the first place. Consider how everyone in ML likes to measure a model's efficacy: false positive and false negative rates. Those metrics in a vacuum (and they're always used in a vacuum) do nothing to prevent bias, and even encourage it. Even with a magically representative dataset, those metrics will absolutely allow an algorithm to decide that being wrong 100% of the time about a minority that makes up 0.5% of the population is a far better model than being wrong 2% of the time across the other 99.5%. Minimizing the mean error function blindly is itself a bias, and the ML field is absolutely abysmal at understanding the shape of the error space and what that implies for their application.

Blaming the researchers is an oversimplification and won't solve the problems AI has with fairness. We take error functions and algorithms that are as naive as a small child, slam a bunch of data taken from the real world into them, and are somehow surprised when they say hosed up things.
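The arithmetic in that 0.5% example is worth seeing laid out - a quick sketch using the post's numbers (the two "models" are hypothetical):

```python
# Two hypothetical models over a population that is 99.5% majority
# and 0.5% minority (the numbers from the post above).
POP = 100_000
minority = int(POP * 0.005)   # 500 people
majority = POP - minority     # 99,500 people

# Model A: wrong about 100% of the minority, perfect on the majority.
errors_a = minority * 1.00
# Model B: perfect on the minority, wrong on 2% of the majority.
errors_b = majority * 0.02

print(f"Model A overall error: {errors_a / POP:.2%}")  # 0.50%
print(f"Model B overall error: {errors_b / POP:.2%}")  # 1.99%
# Judged only on aggregate error, Model A "wins" -- while being useless
# for every single member of the minority group.
```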
---
What's the solution to that, tho? Provide a dataset that doesn't reflect the real world?
---
Don't pretend that ML is magic fairy dust you can sprinkle over important decisions and then use to excuse yourself when it inevitably goes wrong.
---
oh, I thought there was some solution people were looking at beyond the "having common sense" that basically everyone lacks
---
RFC2324 posted:What's the solution to that, tho? Provide a dataset that doesn't reflect the real world?

Volmarias posted:Don't pretend that ML is magic fairy dust you can sprinkle over important decisions and then use to excuse yourself when it inevitably goes wrong.

In short, this. But on the theory side there has been more interest lately in slightly less naive treatments of error, though I'm a bit behind on papers.

At a minimum, you should understand the failures of your model, especially the really bad ones, and actively test for them. Since models from modern algorithms are borderline un-understandable except through evaluation, explicitly testing for biases is required. The first thing I ask anyone who is doing ML is "tell me about the cases your model gets wrong"; if people don't have any idea, you should expect some surprising failures to show up in the worst possible way.

E: Point being, "it's just bad data or bad researchers" understates just how deep the problem goes.
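That "actively test for them" step doesn't need to be fancy; even just slicing the error rate by subgroup instead of reporting one aggregate number surfaces the 0.5% failure mode from earlier - a minimal sketch (the group labels, error budget, and data are all made up for illustration):

```python
import numpy as np

def error_by_group(y_true, y_pred, groups, budget=0.05):
    """Print the error rate per subgroup, not just overall, and flag
    any group whose error rate blows past a chosen budget."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    print(f"overall error: {np.mean(y_true != y_pred):.2%}")
    for g in np.unique(groups):
        mask = groups == g
        err = np.mean(y_true[mask] != y_pred[mask])
        flag = "  <-- over budget" if err > budget else ""
        print(f"group {g!r}: error {err:.2%} (n={mask.sum()}){flag}")

# Hypothetical eval set: 995 majority samples the model always gets
# right, 5 minority samples it always gets wrong. The aggregate looks
# publishable; the per-group view does not.
y_true = [1] * 995 + [1] * 5
y_pred = [1] * 995 + [0] * 5
groups = ["majority"] * 995 + ["minority"] * 5
error_by_group(y_true, y_pred, groups)
```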
---
The point is that machines do not have a particular bias. Node weights are technically biases; however, they are strictly the result of the data, for the purpose of finding optimal features. In machines, a bias concretely does not exist at all before the data set starts being evaluated. People, on the other hand, are almost always biased before, during, and after the data is evaluated.
---
The Gadfly posted:The point is that machines do not have a particular bias. Node weights are technically biases; however, they are strictly the result of the data, for the purpose of finding optimal features.

Your evaluation function immediately introduces a bias: minimize the mean error over all else. That is absolutely a bias, and one that causes problems in any real-world system.

Even if the implementer were perfectly ethical, that would not mean the result of their ML project is. It's easy to say "oh, these ML fairness issues are the results of bad people, but I wouldn't do that" or "it wouldn't happen if we were all good people," but it's not true or productive in fixing things. It's not some deep evil conspiracy that produced the never-ending stories of ML bias or ethics issues in IT, but naivete, both in the tools we use and in the way we approach our problems and put blind faith in tools.