U16 Social Media Ban Doomed to Fail

Well, the clock is ticking towards the e-Karen’s deadline for under-16 age verification.

If you want to be in Australia, or talk about Australia, and use social media or indeed search engines, you’re going to have to prove to the satisfaction of the Australian government that you really are over 16 years of age. We’re being assured that the technology to do this exists, that it’s all good, that it will be rolled out seamlessly, that it’ll be painless, and that it’s not an invasion of privacy.

Well, my next guest has read their report. The government commissioned a $6 million study, the Age Assurance Technology Trial, seeking to prove to us all that it is really all good, that the technology is just fine. My next guest, Dr. Reuben Kirkham from the Free Speech Union of Australia, isn’t quite convinced. Dr. Kirkham, thank you so much for joining me on the Topher Project.

[From video] Thank you for having me. [End video]

Now, this is a classic case of governments and bureaucrats spending taxpayers’ money on a taxpayer-funded report. This is the Age Assurance Technology Trial that we’ve got here. You’ve spent a few days going through it in detail. So for those of us who don’t have the time to do that, bring us up to speed. Has this trial really proved that there are going to be no technological hitches?

[From video]

It’s not proved anything. I think that’s the key message to take from this. If you look into the background of the people doing the study, most of them seem to have no expertise in computer science. Most of them seem to be activists. You look at the track record of the stakeholder advisory board, and you look into the people leading the study: many of them are associated with the Online Safety Act in the UK. One of them, the deputy director of this trial I believe, his main scientific qualification is a degree in equine studies, which is about horses, from Charles Sturt University, about 25 years ago. So the people doing this are not qualified to be making these assessments, and a lot of the things they’re reporting on are self-assessments by these technology companies.

So, for example, the privacy aspect of all these systems: they’ve essentially asked the companies to report on their privacy policies, not their practices, and those practices could change tomorrow. So we don’t know at all whether these systems will actually respect privacy, and that’s the first big problem. But what they are doing, what their story is, is essentially that a computer will automatically assess your age on your device with sufficient accuracy that, if you’re over 20, you’re very unlikely to be bothered by this. Well, I’m not convinced by that at all.

What I think they’re doing is overegging methods that are allegedly more privacy-sensitive, saying they’ll work for most people, as an excuse to get to the second step, which is: we’re not sure enough, we’re going to ID you. And that is the game behind this. And you look at the track record of most of the people behind the report: they’re not neutral scientists. They’re not qualified scientists. They’re not high-performing experts. They’ve got one person who did a peer review, Toby Walsh, who seems to have some expertise in the area. Most of them are policy officers. They’re not experts on the technicalities of privacy. They’re not experts on, for example, fair AI, which is an important area given they’re saying this won’t discriminate against Indigenous people and it won’t discriminate against disabled people. Well, they definitely haven’t proven that to anyone’s satisfaction. [End video]
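To make the two-step flow Dr. Kirkham describes concrete, here is a minimal hypothetical sketch. The threshold, buffer, and function names are all invented for illustration; nothing here reflects the trial’s actual systems.

```python
# Hypothetical sketch of the two-step age check described above: an
# on-device estimate is trusted only if it clears the legal cut-off by a
# buffer; anyone in the uncertain band gets escalated to an ID check.
# All numbers and names here are invented for illustration.

AGE_THRESHOLD = 16   # legal cut-off
BUFFER_YEARS = 4     # estimates between 16 and 20 get escalated

def verify_user(estimated_age: float) -> str:
    """Return the outcome of the hypothetical two-step check."""
    if estimated_age >= AGE_THRESHOLD + BUFFER_YEARS:
        return "allowed: estimator is confident the user is over 16"
    if estimated_age < AGE_THRESHOLD:
        return "blocked: estimated under 16"
    # The uncertain band is where the "we're going to ID you" step lives.
    return "escalated: ID document or credit card check required"

for age in (25.0, 18.5, 14.0):
    print(f"estimated age {age}: {verify_user(age)}")
```

On this logic, everyone whose estimate lands in the uncertain band, however wide it turns out to be in practice, ends up at the ID step anyway.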

Well, we’ve seen these sorts of things before. They’ve gone viral where, for example, AIs have been taught to favor one skin color over another when asked to generate images. Before that, we saw facial recognition software. It was all the rage to have group photos put online with little boxes around everybody’s face. I don’t know whether you came across this particular craze, Reuben, or whether it was just me. The image processing would detect faces and would very often miss the faces of African-American people, and so on. And unless it’s been proven that this isn’t going to be a problem here, there are people with many different kinds of disabilities that might affect how they appear to some sort of an AI.

But I want to dive into some of the language here, Doctor, because I find some of this really interesting. It’s a little bit like when you’re buying a house: there’s a certain language you have to get used to from real estate agents, where they say one thing but mean something else entirely. A classic example is a house listed as “filled with opportunity”. You know that’s just real estate talk for “this place needs to be knocked down and redeveloped from the ground up”. And I find this one here, point 4, on the actual report website itself, ageassurance.com.au/report.

Point 4: “A wide range of approaches exist, but there is no one-size-fits-all solution for all contexts.” Now, at face value, that sounds like they’re just reporting the facts of what they found. Actually, if you think about it, what they’re admitting is that nothing works in all contexts. They’re going to have to hodgepodge it together and do exactly what you said: well, we’ve done our first pass, it hasn’t worked, we’re now going to move on to the second layer, the third layer. And if they’re going to ram this down our throats on December the 10th, that is not very far away. All of these companies are expected by the government to have this in place and implemented. Do you think there’s any possibility of the government delaying the implementation deadline in recognition of the fact that the technology just is not ready for prime time?

[From video]

No. They will pretend it’s ready, and it’s going to be interesting when that happens. But I would say, given the legal test is “reasonable steps”, none of these things are reasonable steps. None of these steps would comply, for example, with anti-discrimination law, you know, the Racial Discrimination Act. And I think what is telling is: where are the numbers? When you go and read a real scientific paper, you don’t have these vague statements. You have “we’ve proven this to this standard”, or not necessarily proven, but “we’ve got evidence of this to this standard”, because you can’t really prove anything absolutely. I mean, science is never about absolute proof, right? Newton’s laws have had modifications made to them by people like Einstein. We understand things, but they are laws you can use to design an aircraft and not crash it.

There’s nothing like that in this paper. The study on race is particularly bizarre. They’ve taken a bunch of photos, at least as I understand it, and split them into white, darker, and dark, and they’ve shown it works less well on dark people, but apparently not too badly. But Indigenous people aren’t African-American, and the thing about machine learning systems is that they have to be trained on data. Think about the diversity of Indigenous people: in this case it’s going to be a big problem, because they’re not going to train it on the data they need to make this work. And the reality is that this is based on your facial contours. I don’t think Indigenous people have exactly the same subtleties of facial contours as, say, African-Americans or Indians. Some might, but some won’t. And therefore the error rate is going to be a lot higher.

Disabled people: they’ve not tested it on disabled people. What they’ve said is they’ve got to do accessibility. Well, accessibility isn’t the same as actually having your system work. Accessibility is things like making sure the button is in a place people can access, the contrast on the screen is correct so someone who’s colour-blind can see it, the font size is big enough for someone with a visual impairment, and there are alternatives so deaf people can interact with it. That’s what accessibility is about. It doesn’t show the performance. There are no performance figures for how well this will work for people with, say, cerebral palsy, who might move their faces differently, or for someone with a condition that changes the look of their appearance, and there are quite a few conditions that do that, or for someone with, say, dwarfism, who has a differently shaped face. Nothing wrong with that, apart from the fact that it’s not going to work very well with an age verification system, or any facial recognition system.

You know, it’s the same type of thing, right? It’s the same underlying technology; it’s just recognizing slightly different things. All it is is a classification system. You feed it example after example after example, and it learns a rule. But if you’re outside the rule it’s learned, then it’s not going to work very well on you. [End video]
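A minimal synthetic sketch of that point, using an invented one-dimensional “facial feature” and made-up numbers: the classifier learns its rule from the training examples, and a population whose features sit differently from that training data sees a far higher error rate.

```python
# Synthetic illustration only: a classifier "learns a rule" from examples,
# then degrades badly on a population outside that training distribution.
import numpy as np

rng = np.random.default_rng(0)

# Training data: a 1-D "facial feature"; label 1 = over 16, 0 = under 16.
feature_under = rng.normal(loc=0.0, scale=1.0, size=500)
feature_over = rng.normal(loc=3.0, scale=1.0, size=500)

# The learned "rule": the midpoint between the two class means.
threshold = (feature_under.mean() + feature_over.mean()) / 2

def predict_over_16(x):
    return x > threshold

# In-distribution over-16s (same population as training): high accuracy.
test_over = rng.normal(loc=3.0, scale=1.0, size=10_000)
print("in-distribution accuracy: ", predict_over_16(test_over).mean())

# Over-16s whose feature sits differently (a group missing from the
# training data): far more false rejections from the very same rule.
shifted_over = rng.normal(loc=1.0, scale=1.0, size=10_000)
print("shifted-population accuracy:", predict_over_16(shifted_over).mean())
```

The same decision rule that works for the population it was trained on falsely rejects most of the shifted group, which is exactly the concern about groups absent from the training data.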

Yeah. And all of what we’ve discussed so far has really been debating whether the technology will work for the policy objectives laid out by the government and by the eSafety Commissioner. But of course we should ask ourselves whether the policy, even if the technology were up to the task, would actually achieve its objectives.

I was reading just the other day about another scare campaign being whipped up about AI-generated images, where people are creating child abuse material using AI-generated content, putting a real child’s face onto AI-generated content of a very disgusting nature. And what was being discussed with a straight face by these policy makers was the fact that these images are being shared around on the dark web, and so what they need to do is ban these sorts of AI generation tools. They seemingly weren’t making the connection that if the people they’re targeting are already accessing the dark web, they’ve already got the onion router set up, they might have Tails on their computer, they’ve already got the technological know-how to do that, then a geolocated ban on a particular AI software tool here in Australia is not going to have any effect on their actions at all. And in fact, what we’re going to see in the case of the under-16s ban is that a lot of under-16s are going to go to tools outside the Australian ecosystem altogether. They’re going to figure out how to access the dark web if they haven’t already, or they’re simply going to switch across to social media platforms that are outside Australian regulation, located and based in countries that don’t care about Australian laws.

Are we actually creating here an opportunity, a very grim opportunity really, for the kinds of child abuse and the kinds of material that the government says it wants to protect children from? Are we actually going to be pushing these children towards it?

[From video]

Absolutely. Basically, you go on a VPN; where do you think the kids are going to be going? It’s not going to be YouTube. They’re going to be going to, I don’t know, whatever the Russian equivalent is. They’re going to go to things that are definitely unsafe. So it’s basically going from bad to worse. And there are really simple solutions to the issue, right? You know, don’t give your kid a phone if you don’t want your kid on the internet. And they could build lockdown phones, right? It’s not hard to make phones that do the censoring on-device, which actually protects kids.

Yeah. And that is something the government could have spent the $6 million on instead of this age verification trial: developing a modified version of Android. It’s not hard to put a rule in place, because that’s what you do with knives or alcohol, right? You can only buy phones if you’ve got ID. Fine, that’s perfectly reasonable. But then you’re not checking what the content is, or who has that phone, or who’s using it. That, I think, is a key distinction. Then parents can have a lockdown version for their kids, and the kids operate within some sort of guard rails, and those guard rails could be set up to very easily alert a parent if the kid’s trying to access porn. The wonderful thing about AI is that it presents opportunities for on-device monitoring that you couldn’t do before. So AI could, for example, pre-scan the video that someone’s looking at. The AI could, for example, limit the number of notifications. So what they should be doing is looking at how we improve the system, rather than going: oh yes, we’ll just ban kids from social media, we’ll ban them, we’ll protect them, we’ll ban kids from drugs, we’ll ban kids from alcohol, we’ll ban kids from rude words, we’ll ban kids from child abusers, we’ll just ban it, banned, banned, we’ve told the tech companies. It’s so typical of government when they’re trying to regulate something that they quite clearly don’t understand. [End video]
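As a rough illustration of the on-device guard-rail model Dr. Kirkham is sketching, here is a minimal hypothetical example. classify_content, alert_parent, and the URLs are invented stand-ins; a real system would run an on-device model over the content itself, not over the address.

```python
# Hypothetical sketch of on-device guard rails: content is checked locally
# and a parent is alerted, rather than anyone's identity being verified or
# data leaving the device. Every name here is invented for illustration.

def classify_content(url: str) -> str:
    """Stand-in for an on-device AI model; returns a coarse category."""
    # A real model would inspect the fetched page or video, not the URL.
    restricted_terms = ("porn", "gambling")
    if any(term in url.lower() for term in restricted_terms):
        return "restricted"
    return "ok"

def alert_parent(message: str) -> None:
    """Stand-in for a local notification to a linked parent device."""
    print(f"[parent alert] {message}")

def open_page(url: str) -> None:
    """Block restricted pages on-device and notify the parent; else load."""
    if classify_content(url) == "restricted":
        alert_parent(f"blocked attempt to open {url}")
        print("page blocked on-device")
    else:
        print(f"loading {url} ...")

open_page("https://example.com/homework-help")
open_page("https://example.com/porn")
```

The design point is that the check and the alert both stay on the child’s device, so no central age database is needed at all.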

These people, most of them are in their 50s or 60s, and they really don’t have an understanding of what it is they’re trying to regulate. I’m mindful of the fact that the federal Senate has actually approved a parliamentary inquiry into the under-16s ban. I interviewed Senator Malcolm Roberts about that a little while ago, and all of the terms of reference for this particular inquiry were focused on the technology and whether it was actually going to work properly, which is the question, I guess, that this study was seeking to answer. Senator Malcolm Roberts put forward a motion to broaden the terms of reference with one new point: to consider the role of parents in protecting children in their activities online. And Labor and the Greens teamed up to make sure that change to the terms of reference was not included. It seems as though the government is actively trying to deny us the opportunity to even talk about some of these alternatives. Do you think there’s a bigger agenda at play here, something else going on behind the under-16s ban, given that the government wants it so badly?

[From video]

Well, they want to control the internet. They want to do all they can to censor the internet. I mean, you had Alan Easy ranting today about AIs and freedom of information requests, and how AIs can make freedom of information requests, and it’s like: well, really, is that a problem? Even if you’re correct about that, what’s so wrong about a citizen using, say, ChatGPT to make an FOI request that is more precise, more focused, more thought through? What’s so bad about that? What’s so wrong about, say, a tool that helps people make more structured FOI requests? So he was ranting and raving about this tool that I made. He didn’t name me. We made a tool to send FOI requests to the eSafety Commissioner’s office. [End video]

Sure.

[From video]

And the point of the tool was to make it reasonably easy for people to make requests, but also to structure them. So if the eSafety Commissioner’s office was remotely competent, which it’s not, it would have been very easy to deal with those requests. It would have taken them five minutes, because they should have had no information; they shouldn’t have been keeping that information. The reason it cost the eSafety Commissioner so much money was because they don’t know how to search their own systems. Their staff don’t know how to use computers properly. I had a very weird discussion over one of my freedom of information requests, and it was so bizarre that I offered to have a meeting with them to explain how to search their own systems properly. And they were like, uh, well, okay. And I was like: okay, just run this command, it will give you a list of files, just give me the list and I can see which ones I might be interested in.

And the Commissioner had, I think, 4,000 records that made some reference to the Free Speech Union at some point. Well, they’re probably mostly people sending emails around the workplace going, oh, look what the Free Speech Union’s done now, or debating the freedom of information requests that came in. It was something like 600 requests, and they only bothered replying to about 150 of them, because they said 450 of them used X accounts as their reply addresses, which is completely legal, a perfectly legal thing for you to do. They just decided they were not going to reply to them. And most of the requests were like, “Are you spying on me?” Well, if you’re not, it’s easy to say, “Sorry, no information.” The problem is they were spying on people. And on one poster, I think an Australian lawyer, they had about 10,000 pages. It only costs you material if you keep it, doesn’t it? I mean, this is the thing. It’s just clueless.

But the problem, yeah, there is the 50-to-60-year-old issue, but there are some smart 50-to-60-year-olds, right? Sure. The problem is that the most incompetent people end up doing the online safety stuff, the people who’ve got no technological qualifications. When they come in, someone should ask: have you got a PhD in computer science? What did you study? Was it technical? Because you also have to watch out: some people doing computer science PhDs now are not technical; they’re doing, say, qualitative studies of how people interact with computers. So they might sit and watch people interacting with Zoom and understand their behaviour, which is a valuable human factors study, but not exactly the technical expertise you need. And the reality is that we run rings around the eSafety Commissioner’s office, because they don’t know what a computer is half the time. They’re clueless. It’s amusing and sad at the same time.

But with this age assurance study, the people running it probably actually believed that what they were doing was impartial and correct. They just didn’t have the ability or the background knowledge to execute it, and they didn’t even comply, as far as I can tell, with the normal Australian ethics requirements for research. There are research ethics rules in Australia which are a bit stricter than the UK’s, and the people doing it are actually based in the UK, a lot of them associated with Ofcom. And you look at it, and no academic would be allowed to do this in Australia. You’re not allowed to appoint your own ethics committee. If you make an ethics committee, there’s supposed to be an open application process for people to be appointed to it, not half your ethics committee being from one company, and all of them from industry. I don’t think any of them were actual academics on that committee. One of them was an apprentice. One of them might have been the equine studies person. Another one had a classics degree. Their only background was working in Ofcom and a few other jobs; some of these people are barely out of university. And this was an ethics board. They had an Indigenous person on it to make sure all the Indigenous material was dealt with, apparently, but they didn’t have anyone who really understood this. And put it this way: in the time frame they ran this in, it would probably take an academic that long just to get approval for the study, which isn’t good either. But the process they were running just didn’t meet the ethical standards of research, based on what I’ve read. There may be something I missed, but it doesn’t look like it. I made a long list of about 20 different points where the National Statement was not followed. It’s not binding legislation, but it’s still really concerning that you have this amateurish approach to doing research.

And you look at the bibliography: they’ve not even cited most of the papers properly. They’re citing things like extended abstracts, which basically means work in progress, as if they were established, peer-reviewed work, along with a whole bunch of eSafety Commissioner reports. You can go through the whole thing. It’s got a lot of sleek marketing on it, but does it look like a scientific paper? No. Why does it not look like a scientific paper? Because most of it’s not actually very good science. I mean, they’ve got some things sort of right; they’ve sort of used the right performance metrics up to a point, but they’ve not explained them properly, and I think they’ve come at it with a cognitive bias of believing these things might just work. I’ve not seen any evidence in there showing that they do work, not in the sense most people would see as working: something privacy-sensitive that people can verify for themselves, that’s open source, something that’s going to be accurate enough that you’re not going to have to go on to that second stage of handing over your credit card details or your passport. Apparently the legislation says you don’t have to do that, but I reckon they’ll probably find ways around it. The reality is everyone’s going to get VPNs. And then, do you think the USA is going to implement this? Well, most states are asking. [End video]
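For what it’s worth, the “just run this command” search Dr. Kirkham describes really is only a few lines; here is a minimal sketch, with the records path and search term invented for illustration.

```python
# Minimal sketch of the "just give me a list of files" search described
# earlier: walk a records directory and print every file whose name
# contains a search term. Path and term are invented for illustration.
from pathlib import Path

def matching_files(root: str, term: str) -> list[Path]:
    """Return every file under `root` whose name contains `term`."""
    return [
        p for p in Path(root).rglob("*")
        if p.is_file() and term.lower() in p.name.lower()
    ]

for path in matching_files("/srv/agency-records", "free speech union"):
    print(path)
```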

This is the big question, Reuben. With Trump having already stomped on the UK pretty hard regarding some of the restrictions they wanted to impose, particularly around breaking end-to-end encryption in the UK through Apple: the US government got involved in that, and the UK government backed down. I did some videos on that last week or the week before. We’re seeing a lot of pressure being applied to the EU. I can only hope that we actually see a lot of pressure applied to Australia and to the eSafety Commissioner from Donald Trump and from the US administration, because these rules will affect Americans specifically: they’re designed to capture anyone who uses a VPN, by classifying them as being in Australia based on the content they’re posting. Which means that American citizens posting about Australia, or visiting Australia, are going to fall foul of these restrictions, and I’m sincerely hoping that will draw Donald Trump into this particular conflict.

We’ll continue to watch this story as it develops. But Dr. Kirkham from the Free Speech Union of Australia, thank you so much for giving us a rundown on this sham of a study into the technological solutions for age verification. It just seems like this entire process is one big balls-up from start to finish. Thank you so much for joining us on the Topher Project.

[From video] Thank you for having me. [End video]
