So much worse. Required to give the data to the govt.
Easy-to-read explanation.
https://www.datanami.com/2024/01/30/ai-companies-will-be-required-to-report-safety-tests-to-u-s-government/
Executive Order.
https://www.whitehouse.gov/briefing-room/statements-releases/2024/03/28/fact-sheet-vice-president-harris-announces-omb-policy-to-advance-governance-innovation-and-risk-management-in-federal-agencies-use-of-artificial-intelligence/#:~:text=The%20policy%20released%20today%20requires,is%20addressing%20the%20relevant%20risks.
Apple was founded as a CIA front in the first place. Steve Jobs' dad was friends with Allen Dulles.
They're made in China. China requires a government office in all companies.
What’s the basis for the last line about Dulles? I’m not finding anything searching around.
Sadly, on this one, while the fact itself is unclassified, I can't discuss sources and methods without imperiling myself. As is true of many things about that asshole Dulles. Even long dead, the butt wipe has a lot of coverage.
Oh, I just meant any link or anything else surface level saying that. I just couldn’t find anything linking the two in my 5 min search. Don’t need to provide concrete evidence, I had just not heard that before.
The proposed policy is that companies will have to report to the government when they train an AI model beyond a certain amount of computing power (and it's a lot). It's actually a pretty minimal and reasonable step, uncharacteristic of our boomer government.
There is no requirement to hand over training data. If an agency wants your cloud data, they can already get that through national security letters.
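For a rough sense of how high that compute bar sits, here's a back-of-the-envelope sketch. The ~1e26-operation reporting trigger and the ~6 x parameters x tokens cost rule are assumptions on my part (widely reported figures, not quoted from the order's text):

    # Rough check of whether a training run would cross the reporting threshold.
    # Assumes the widely reported 1e26-operation trigger and the common
    # ~6 * parameters * training-tokens FLOP approximation for dense transformers;
    # both numbers are assumptions, not quotes from the executive order.

    REPORTING_THRESHOLD_FLOP = 1e26  # assumed value

    def training_flops(params: float, tokens: float) -> float:
        """Approximate training cost: ~6 FLOPs per parameter per token."""
        return 6 * params * tokens

    for name, params, tokens in [
        ("7B model, 2T tokens",     7e9,    2e12),
        ("70B model, 15T tokens",   70e9,   15e12),
        ("1.8T model, 30T tokens",  1.8e12, 30e12),
    ]:
        flops = training_flops(params, tokens)
        flag = "would have to report" if flops >= REPORTING_THRESHOLD_FLOP else "below threshold"
        print(f"{name}: ~{flops:.1e} FLOPs -> {flag}")

Under those assumptions, only runs well beyond today's largest published models would trip the reporting requirement, which is the sense in which "it's a lot".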
They train AI using your data, which you didn't give them permission to use.
https://x.com/elonmusk/status/1800265431078551973
If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.
And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage
There was a documentary about a call centre that, for data protection reasons, banned outside phones on site. Your phone had to be left either in the car or at home. No exceptions and no mercy if you broke that rule. And that documentary was filmed before the proliferation of smartphones.
I remember a factory like that back in the flip phone days. It wasn't the phones per se they had an issue with, though; it was the cameras most of them had, which could be used to take pictures of trade secrets. So they only allowed non-camera flip phones and everything else had to be left in the parking lot.
You think that is good for you. It's not. Those calls are recorded. Twice in my life I had a random stranger hang up and call me from their cell phone to say what they couldn't say on a recorded line.
Well, the situation is pretty similar to Elon giving all of your login info to an Israeli cybersecurity company with direct ties to Israeli intelligence.
Good point!
The fun fact is that the US government collects so much data that it doesn't even know how to begin processing and analyzing all of that nondescript data goop.
Why do you think AI is being pumped so hard? The feds want to automate Big Brother.
They don't have to analyze it all. Just store it somewhere they can search whenever they want to dig up dirt on someone.
That's one big pile of dirt.
Never taken a databases class?
I wonder if Elon is going to switch to Linux? At least there everything is modular and you can strip out any outside AI from your distribution. Some Linux and BSD distributions have a strict no AI policy if I recall correctly.
And maybe release an X phone.
If he starts using Arch everyone will quickly know.
Dragon already runs on a homebrew linux. I suspect most of their OSes are Linux.
Replies are full of people calling out Elon for doing the same thing.
https://pbs.twimg.com/media/GPvgCdbbIAAVwfC?format=jpg&name=small :)
Is there a smartphone that doesn't spy on you? I thought android was just as bad.
Elon needs to put a minuscule amount of his money where his mouth is and fund a decent Linux phone.
He promised Tim Apple he wouldn't do it. (yet)
Aren't teslas filming all the time and uploading data to train their self driving AI? Don't you have to pay to unlock features of your car that already exist but are locked with software? A tesla is just a big rolling iphone, except not as well made.
while allowing an Israeli firm to collect all monetized users' data...
People are stupid beyond belief. No amount of opsec lectures is going to do anything. You ban the device if you want to protect against the threat it represents.
It's not the grunts on the floor that will bring in the phones once they are banned. It will be the middle managers and higher-ups who are "too important" to let go of their smartphones.
It will be Lisa the executive assistant who will have one (newest model iphone, of course) and flout the rules because her work is too important.
Data will be stolen and it will all be blamed on some grunt who brought in a phone and got fired 7 months back.
[* it will all be blamed on some white guy - as others have the threat of violence to protect them]
I once had a manager playing with his smartphone in a secure facility with classified info up on my screen, and I about blew a gasket. He got walked out a few weeks later.
Every halfway competent company already has their employees go through security training and regular refresher courses. But breaches still happen, because the average normie end-user is dumb.
Yep.
Take a look at the proprietary google shit that gets uploaded to github every now and again. We have but one prayer: "Lord, may our enemies be stupid."
...and He answers!
The factory workers will comply. Some HR lady won't though and screw the whole thing up.
It's usually HR making decisions they don't understand.
They just don't care, and they don't take the time (or are too stupid) to follow through.
Whenever you do something like this, you have to pay a premium to employees, because your competition does not have this requirement.
It’s very hard to hire in software right now.
They did that to themselves.
Security modeling is more complicated for organizations at that scale. Gotta assess the importance and sensitivity of various trade secrets, and compartmentalize accordingly. Being too secretive can compromise product quality, or the effectiveness of said secrecy policies themselves. "Wages of secrecy" is the term I'm aware of, from ESR's writings on open source software.
At Twitter, any such blanket policy would be absolute overkill, and result in a brain-drain and stiffening of corporate culture. SpaceX and Tesla have more pervasive trade secrets; there, it's for the right experts to perform the cost/risk analysis.
An OpenAI (or any similar cloud product) ban is what any competent corporation should be doing. For appropriate opsec, a company has to have a general culture of literacy, merit, loyalty, and independent thinking, best summed by the saying "common sense is not so common".
This is quite reasonable. Elon always takes things to the extreme so he'll probably do the full ban or nothing at all. I doubt there's much in the way of secret ingredients at twitter he needs to hide. Doesn't he claim to want to open source everything anyway?
That's all part of network design.
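For what the "network design" side of that kind of ban might look like, here's a toy egress-filter sketch. The hostnames and the deny-by-suffix policy are purely illustrative assumptions, not any company's actual blocklist:

    # Toy sketch of the egress-filtering side of a "no cloud AI" policy:
    # deny outbound requests to known AI-service hostnames.
    # Hostnames and policy here are illustrative assumptions only.

    from urllib.parse import urlparse

    BLOCKED_SUFFIXES = (
        "openai.com",        # e.g. api.openai.com
        "anthropic.com",
        "generativelanguage.googleapis.com",
    )

    def is_egress_allowed(url: str) -> bool:
        """Return False for destinations matching a blocked hostname suffix."""
        host = (urlparse(url).hostname or "").lower()
        return not any(host == s or host.endswith("." + s) for s in BLOCKED_SUFFIXES)

    if __name__ == "__main__":
        for u in ("https://api.openai.com/v1/chat/completions",
                  "https://internal.example.com/build"):
            print(u, "->", "allowed" if is_egress_allowed(u) else "blocked")

In practice this lives in the proxy/firewall layer rather than application code, but the decision logic is the same.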
I worked a job from hell that intentionally blocked cell service. You couldn't even call 911. They got in a lot of trouble for that.
I reported them to the Fire Marshal. That man got noticeably angry over the phone.