hilbert42 a day ago

A resident of said country here. Another questionable measure by the Government to protect our mollycoddled, insufficiently resilient society.

That said, a better approach would be to limit kids under a certain age from owning smartphones with full internet access. Instead, they could have a phone without internet access—a dumb phone—or one with curated/limited access.

Personally, I'm not too worried about what risqué stuff they'll see online, especially teenagers (they'll find that one way or another); it's more about the distraction smartphones cause.

Thinking back to my teenage years I'm almost certain I would have been tempted to waste too much time online when it would have been better for me to be doing homework or playing sport.

It goes without saying that smartphones are designed to be addictive, and we need to protect kids more from this addiction than from bad online content. That's not to say they should have unfettered access to extreme content; they should not.

It seems to me that having access to only filtered IP addresses would be a better solution.

This ill-considered gut reaction involving the whole community isn't a sensible decision, if for no other reason than it allows sites like Google to soak up even more of a user's personal information.

  • abtinf a day ago

    > Another questionable measure by Government to protect our mollycoddled, insufficiently-resilient society

    Complains about mollycoddling.

    > a better approach would be to limit

    Immediately proposes new mollycoddling scheme.

    • hilbert42 a day ago

      Mollycoddling kids is one thing, we've always done that to some extent. Mollycoddling adults is another matter altogether.

      • xboxnolifes 21 hours ago

        Both proposals are mollycoddling children. It just happens that one of them inconveniences adults.

        • strken 21 hours ago

          "Inconvenience" is downplaying the impact of not letting adults use incognito mode to search for things.

          Yes, right now search engines are only going to blur out images and turn on safe search, but the decision to show or hide information in safe search has alarming grey areas.

          Examples of things that might be hidden and which someone might want to access anonymously are services relating to sexual health, news stories involving political violence, LGBTQ content, or certain resources relating to domestic violence.

          • rendall 17 hours ago

            Also porn. Let's be honest, all of this energy expenditure is about porn.

            • roenxi 15 hours ago

              While anyone who wants to ban people looking at porn will be on side with this, the political oomph is probably more from authoritarians who are working towards a digital ID. Anyone who cares about the porn angle would be forced to admit this won't do very much. Anyone who wants to keep the wrong people out of politics would be quietly noting that this is a small but unquestionable win.

            • Cartoxy 17 hours ago

              Is it though? We've been doing porn since forever, and porn isn't gatekept by search engines at all.

              Seems like a long-term slow burn toward government tendrils, just like digital ID. And the example given came across as desperate to show any real function, contradictory even.

              Then pivot: what about the children. Small steps, and we're right back on the slippery-slope gradient.

            • ptek 13 hours ago

              Hmm people will go back to using lingerie catalogs or start using LLM prompts?

            • GoblinSlayer 16 hours ago

              People search for porn on Google? Because Google is the internet itself?

              • falcor84 15 hours ago

                Because it's easier to put your query into the address bar than to open a dedicated search page, and most people use Chrome with the default being Google search.

                • XorNot 14 hours ago

                  Absolutely no one searches for porn on Google except if they don't know the URL of an aggregator.

                  Which that one kid will tell everyone if they don't.

  • tacticus 20 hours ago

    > That said, a better approach would be to limit kids under certain age from owning smartphones with full internet access. Instead, they could have a phone without internet access—dumb phones—or ones with curated/limited access.

    This wouldn't allow them to watch gambling ads or enjoy Murdoch venues.

    • hilbert42 19 hours ago

      Oh, the cynicism of some people. :-)

      Yes, that empire exported itself to where it would have the greatest effect—cause the most damage.

  • jolmg 10 hours ago

    > That said, a better approach would be to limit kids under certain age from owning smartphones with full internet access. Instead, they could have a phone without internet access—dumb phones—or ones with curated/limited access.

    Why should this be the government's responsibility rather than the parents'?

    • reaperducer 6 hours ago

      > Why should this be the government's responsibility rather than the parents'?

      For the same reason that the government limits smoking and alcohol. Because the parents can't/won't.

      • jolmg 5 hours ago

        A teen can go to the store on their own and consume the cigarettes and alcohol right out the door without the parents knowing. There I can see why the parent would need the collaboration of greater society.

        But for a phone? A child or early teen shouldn't be able to afford a phone, nor sign a contract with a cellphone service provider while underage. That should be collaboration enough. If they got a phone beforehand, it's because the parents themselves got it for them.

        Even considering a mid-teen starting work, buying a phone and using it over WiFi, they can only really own things with the parents' approval. They can't really use it enough to form an addiction without the parents noticing and having the opportunity to confiscate it.

  • SlowTao 17 hours ago

    > Thinking back to my teenage years I'm almost certain I would have been tempted to waste too much time online when it would have been better for me to be doing homework or playing sport.

    That is true. I spent my time coding a 2D game engine on a 486; it eventually went nowhere, but it was still cool to do. But if I'd had the internet then, all that energy would have been put into pointless internet stuff.

    • kolinko 16 hours ago

      I'd had internet access since I was 13, although it was the internet of 1996, so it was way more basic.

      And for me it was a place to explore my passions way better than any library in a small city in Poland would allow.

      And sure - also a ton of time on internet games / MUDs, chatrooms etc.

      And the internet allowed me to publish my programs, written in Delphi, from the age of 13-14, and to meet other programmers on Usenet.

      On the other hand, if not for the internet, I might have socialised way more IRL, probably doing things that were way less intellectually developing (but more social).

      It just hit me that I need to ask one of my friends from that time what they did in their spare time, because I honestly have no idea.

      • jdcasale 12 hours ago

        I'd keep in mind that internet usage of 96 (I was there) bears no resemblance whatsoever to internet usage of today. The level of predatory sophistication of today's attention economy makes any sort of comparison between the two misguided at best.

      • bombcar 12 hours ago

        The Internet of 1996 and even of 2006 was a lot more “work” than the direct-into-your-eyebulbs Internet of today.

        YouTube didn’t start until 2005! Even just getting Flash working to watch Homestar Runner was an effort.

    • johnisgood 15 hours ago

      I had the Internet when I was a kid and I ended up being a software engineer with useful skills in many different areas.

      You are wrong to blame the Internet (or today LLMs). Do not blame the tool.

      Sure, I consumed sex when I was a kid, but I did a fuckton of coding of websites (before JavaScript caught up, but in JavaScript) and modding of games. I met lots of interesting and smart people on IRC with mutual hobbies and so forth. I did play violent games, too, just FYI, when I was not making mods for them.

      • pferde 14 hours ago

        Could the difference between your experience and that of today's teenagers be in the fact that in your time, there were no online content farms hyperoptimized for maximum addictiveness, after their owners invested millions (if not billions) into making them so?

        • ta12653421 13 hours ago

          Back then the web (and prior networks like Gopher and Usenet) was used and filled mainly by professionals working in one field or another; and if you were online, you already demonstrated a basic tech understanding, since it wasn't as convenient as today. Sure, porn existed early on, but "entertaining web content" as we know it today just didn't exist.

          • johnisgood 13 hours ago

            Yes, especially IRC. What people call today "gatekeeping" is exactly what gave IRC networks value.

        • johnisgood 14 hours ago

          Yes, I believe so. The only thing that was addictive to me was coding. It really was addictive. I did not leave the house all summer when I was >13 because I was busy coding. But then again, this "addiction" helped me a lot in today's world. That said, I am left with serious impostor syndrome, and my social skills aren't the best, which are also required of a programmer in today's world. :/

    • qingcharles 8 hours ago

      I spent time creating 2D and 3D game engines. It got a lot easier once the Internet arrived for me in 1993. I could connect with other like minds and found a wealth of useful information.

      Sure, there was a lot of dicking around, but overall it was positive.

    • theshackleford 15 hours ago

      I had the internet as a youth, and it is pretty much entirely responsible for me having been able to build a social network and social capabilities, build the career I have today and ultimately break out of poverty.

  • Tade0 17 hours ago

    My take: just as we use an allowance to introduce children to the concept of money, parents could use a data allowance to introduce children to the concept of the internet.

    The worst content out there is typically data-heavy; the best, not necessarily, as it can well be mostly text.

    • red_admiral 12 hours ago

      Money is, depending on the country, slowly evolving from physical coins/notes, to plastic cards, to pretend plastic cards on smartphones, to the same but needing an app to manage the account, to let's-stop-pretending-and-just-use-an-app in the first place.

      The last one is difficult because you need a common standard: either someone becomes a monopoly (or two or three quasi-monopolies such as Google/Apple), or better still, this is one of the few cases where government regulation could do more good than harm.

      I think China is already close to the last phase, at least in cities, going down the government-regulated route?

      This is highly country dependent of course - in some places shops must accept coins by law, even if it's so unusual that you have to roll a critical success to get the right amount of change back.

      I would like a world where we can give children physical pocket money rather than some abstraction, and they don't need a smartphone of their own to check their balance. But we'll probably have to fight for that at some point.

    • closewith 16 hours ago

      That's a naïve view of the internet, where much of the worst experiences children have are in text via chat.

      • Tade0 14 hours ago

        Pretty sure a picture is still worth a thousand words. Also text is something you can prepare for, police if need be.

        Random visual internet content? Too many possibilities, too large a surface area to cover.

  • dzhiurgis 16 hours ago

    The Australian gov can’t even enforce the vape ban, so how would you expect a smartphone ban to be enforced?

    • florkbork 13 hours ago

      What if the point isn't to enforce at the user level, but at the company level?

      30,000 penalty units for violations. 1 unit = $330 AUD at the moment.
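      For scale, that ceiling works out to nearly $10 million AUD per violation. A quick sketch (the $330/unit rate is the figure quoted above and is indexed over time, so treat it as a snapshot):

```python
# Maximum civil penalty under the bill, using the figures quoted above:
# 30,000 penalty units at the current rate of $330 AUD per unit.
PENALTY_UNITS = 30_000
AUD_PER_UNIT = 330  # indexed periodically, so this rate will drift

max_penalty_aud = PENALTY_UNITS * AUD_PER_UNIT
print(f"${max_penalty_aud:,} AUD")  # $9,900,000 AUD
```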

  • florkbork 13 hours ago

    I find I am broadly supportive of these laws (The Online Safety Amendment (Social Media Minimum Age) Bill 2024), even if this specific regulation is a bit of pearl clutching wowserism.

    Why? If you read the original legislation https://parlinfo.aph.gov.au/parlInfo/search/display/display....

    You get 30,000 civil penalty units if you are a scumbag social media network and you harvest someone's government ID. You get 30,000 civil penalty units if you don't try to keep young kids away from the toxic cesspool that is your service, filled with bots and boomers raving about climate change and reposting Sky News.

    This absolutely stuffs those businesses who prey on their users, at least for the formative years.

    And when I think about it like that? I have no problem with it, nor the fact it's a pain to implement.

  • kypro 13 hours ago

    100% agree.

    The framing that explicit material is bad for kids, while probably true, is beside the point. Lots of things a parent could expose a child to could be bad, but it's always been seen as up to the parent to decide.

    What the government should do is ensure that parents have the tools to raise their kids in the way they feel is appropriate. For example, they could require device manufacturers to implement child modes, or that ISPs provide tools for content moderation, which would put parents in control. This instead places the state in the parental role over its entire citizenry.

    We see this in the UK a lot too. This idea that parents can't be trusted to be good parents and that people can't be trusted with their own freedom, so we need the state to look after us, seems to be an increasingly popular view. I despise it, but for whatever reason that seems to be the trend in the West today – people want the state to take on a parental role in their lives. Perhaps aging demographics have something to do with it.

  • theshackleford 16 hours ago

    > That said, a better approach would be to limit kids under certain age from owning smartphones with full internet access. Instead, they could have a phone without internet access—dumb phones—or ones with curated/limited access.

    This would be completely and utterly unenforceable in any capacity. Budget smartphones are cheap enough and ubiquitous enough that children don't need your permission or help to get one. Just as I didn't need my parents' assistance to have three different mobile phones in high school when, as far as they knew, I had zero phones.

    • account42 12 hours ago

      Which is of course why we don't bother making selling cigarettes and alcohol to children illegal. Except we totally do that because it largely works even if sufficiently motivated individuals can and do get around the restrictions.

      • theshackleford 11 hours ago

        Cigarettes and alcohol are consumable products that must be acquired again and again. There are already millions of phones in open circulation, and you only need to acquire one once.

        Even if you could stop phones, you won't stop them from accessing it from a literally near-infinite supply of other devices.

        It's pure and utter fantasy.

        • account42 11 hours ago

          Mobile phone plans are still subscriptions and not buy-for-life things, last I checked.

          • tzs 9 hours ago

            You don’t need a mobile phone plan to use free Wi-Fi to access the Internet.

  • bamboozled 21 hours ago

    [flagged]

    • graemep 15 hours ago

      > it would suck for the ruling class though because we'd have to stop feeding kids religion

      The ruling class in the west are generally extremely anti-religious. They have a good reason to be - the biggest religion in the west is anti-wealth (the "eye of the needle" things etc.) and generally opposed to the values of the powerful.

      The US is a sort of exception, but they say things to placate the religious (having already been pretty successful in manipulating and corrupting the religion) but very rarely actually do anything. I very much doubt the president (or anyone else) in the current US government is going to endorse "give all you have to the poor".

    • DiggyJohnson 18 hours ago

      > it would suck for the ruling class though because we'd have to stop feeding kids religion

      This seems out of place and unrelated. If anything, Gen Z and presumably Alpha, eventually, are more religious than their parents.

    • frollogaston 20 hours ago

      or just don't get them smartphones

      • pmontra 19 hours ago

        Misinformation and propaganda are not only on smartphones.

        • fc417fc802 19 hours ago

          Still, those do make it awfully easy to subscribe to notifications that actively push all sorts of problematic things onto you at an alarming rate. A high rate of exposure to something can lead to problems where there otherwise wouldn't be any.

marcus_holmes a day ago

2025: if you're logged in, then we check your age to see if you can do or see some stuff

2027: the companies providing the logins must provide government with the identities

2028: because VPNs are being used to circumvent the law, if the logging entity knows you're an Australian citizen, even if you're not in Australia or using an Aussie IP address then they must still apply the law

2030: you must be logged in to visit these specific sites where you might see naked boobies, and if you're under age you can't - those sites must enforce logins and age limits

2031: Australian ISPs must enforce the login restrictions because some sites are refusing to and there are loopholes

2033: Australian ISPs must provide the government with a list of people who visited this list of specific sites, with dates and times of those visits

2035: you must be logged in to visit these other specific sites, regardless of your age

2036: you must have a valid login with one of these providers in order to use the internet

2037: all visits to all sites must be logged in

2038: all visits to all sites will be recorded

2039: this list of sites cannot be visited by any Australian of any age

2040: all visits to all sites will be reported to the government

2042: your browser history may be used as evidence in a criminal case

Australian politicians, police, and a good chunk of the population would love this.

Australia is quietly extremely authoritarian. It's all "beer and barbies on the beach" but that's all actually illegal.

  • naruhodo 19 hours ago

    Mate...

    > 2038: all visits to all sites will be recorded

    That's been the case since 2015. ISPs are required to record customer ID, record date, time and IP address and retain it for two years to be accessed by government agencies. It was meant to be gated by warrants, but a bunch of non-law-enforcement entities applied for warrantless access, including local councils, the RSPCA (animal protection charity), and fucking greyhound racing. It's ancient history, so I'm not sure if they were able to do so. The abuse loopholes might finally be closed up soon though.

    https://privacy108.com.au/insights/metadata-access/

    https://delimiter.com.au/2016/01/18/61-agencies-apply-for-me...

    https://www.abc.net.au/news/2016-01-18/government-releases-l...

    https://ia.acs.org.au/article/2023/government-acts-to-finall...

    • SlowTao 17 hours ago

      I cannot find it any more due to the degradation of Google, but there was a report on the number of times this data was accessed in NSW for 2018(?). It was something like 280,000 requests for that year alone!

    • closewith 16 hours ago

      Yes, the 2038, 2040, and 2042 scenarios are already reality in most of the world. We're in the dystopian nightmare.

  • incompatible a day ago

    > 2042: your browser history may be used as evidence in a criminal case

    We already reached that point several years ago.

    • marcus_holmes 21 hours ago

      yeah true, I should have made it more explicit that it's your entire browser history, and every criminal case

      • bravesoul2 16 hours ago

        The interesting question is: warrant required, or no warrant?

  • pmontra 19 hours ago

    > 2039: this list of sites cannot be visited by any Australian of any age

    Block lists are not new. For example, Italy blocks a number of sites, usually at the DNS level with the cooperation of ISPs and DNS services. You can auto-translate this article from 2024 to get the gist of what is being blocked and why: https://www.money.it/elenco-siti-vietati-italia-vengono-pers...

    I believe other countries of the same area block sites for similar reasons.

  • red_admiral 11 hours ago

    My understanding of US current policy:

    > your browser history may be used as evidence in a criminal case

    Already the case. Mostly for the kind of dumb criminal who is suspected of murder and has been found googling "defences to murder" and "how to hide a body".

    > the companies providing the logins must provide government with the identities

    If there's a court order (good) or a national security letter (occasionally good but very open to abuse). Maybe the NSA or some guy in DOGE has automatic API access to this data anyway.

    > you must be logged in to visit these specific sites where you might see naked boobies, and if you're under age you can't - those sites must enforce logins and age limits

    Already the case for youtube and reddit content marked NSFW - either by the creator or by a fairly stupid algorithm. (You can see these boobies, but not those ones.) But the age verification is mostly "open a new account and enter a birth date". Also reddit has the dumbest age verification/login bypass ever. (Your honor, editing an URL is nation-state level hacking and we can't reasonably defend against that.)

    > all visits to all sites will be recorded

    Something something Permanent Record.

    > you must have a valid login with one of these providers in order to use the internet

    OK, this one is cheating a bit, but don't you need a Google (or Samsung etc.) account to set up an Android phone, let alone access the internet?

    Also cheating a bit but you need a login and contract with your ISP to get on the internet too.

  • m3sta 20 hours ago

    Australian politicians, police, and a specific chunk of the population would be exempt from this... like with privacy laws.

    • marcus_holmes 20 hours ago

      indeed. Rules for thee but not for me.

  • SlowTao 17 hours ago

    As others have pointed out, many of these are already present here. I suspect the rest of your timeline is far too optimistic in how long it will take to get there. I suspect that with the pace of decline, most of that will be enacted in the next 5 years.

    I would like to say "It is all because of X political party!" but both the majors are the same in this regard and they usually vote unanimously on these things.

  • tbrownaw 21 hours ago

    > 2030: you must be logged in to visit these specific sites where you might see naked boobies, and if you're under age you can't - those sites must enforce logins and age limits

    Some states in the US are doing this already. And I think I saw a headline about some country in Europe trying to put Twitter in that category, implying they have such rules there already.

  • almosthere 17 hours ago

    2027: ufos visit, and decide to end the human experiment, game over.

  • megablast 13 hours ago

    > your browser history may be used as evidence in a criminal case

    Pretty sure google searches have been used in murder trials before, including the mushroom poisoning one going on right now in Victoria.

  • closewith 16 hours ago

    > Australia is quietly extremely authoritarian.

    Not quietly, I don't think. Not like Australia is known for freedom and human rights. It's known for expeditionary wars, human rights abuses, jailing whistleblowers and protesters, protecting war criminals, environmental and social destruction, and following the United States like a puppy.

    • bravesoul2 16 hours ago

      US is the same but with a different leash-holding country.

  • Nursie 17 hours ago

    > your browser history may be used as evidence in a criminal case

    As others have said, that's the case already and not just in Australia. Same in lots of other places like the UK and the whole EU. Less so in the US (though they can demand any data the ISP has, and require ISPs to collect data on individuals)

    > Australia is quietly extremely authoritarian.

    It is weird. As a recent-ish migrant I do agree: there are rules for absolutely bloody everything here, and the population seems in general to be very keen on "Ban it!" as a solution to everything.

    It's also rife with regulatory capture - Ah, no mate, you can't change that light fitting yourself, gotta get a registered sparky in for that or you can cop a huge fine. New tap? You have to be kidding me, no, you need a registered plumber to do anything more than plunger your toilet, and we only just legalised that in Western Australia last year.

    It's been said before, but at some point the great Aussie Larrikin just died. The Wowsers won and most of them don't even know they're wowsers.

    • cmoski 13 hours ago

      There are a lot of people changing their own light fittings. I have never heard about laws against plumbing but I don't see them stopping old mate from doing it.

      Electrical work can be pretty dangerous...

      • Nursie 13 hours ago

        Yeah, here in WA at least, there are signs up in the plumbing section of Bunnings saying “Stop! DIY plumbing is illegal! Only buy this stuff if you’re getting a professional to fit it!”

        The reasoning is often “people might contaminate the water supply for a whole street!” Which just points to poor provision of one way valves at the property line.

        But yeah, illegal.

        I agree there are limits with what you want to do on electricity, but turning the breaker off and replacing a light fitting or light switch is pretty trivial. And I know people do just get on with it and do some of this stuff themselves anyway.

        Was particularly pissed off that in January this year the plumbing “protections” were extended to rural residents who aren’t even connected to mains water or sewage, to protect us from substandard work by … making it illegal for us to do it ourselves. Highly annoying.

        • cmoski 13 hours ago

          Well there you go. Probably thanks to a plumbing lobby. Lucky we moved out of WA.

          A lot of laws can be interpreted as recommendations :)

          • Nursie 11 hours ago

            I went as far as to look up the last ‘consultation’ on this, and yep, all down to the plumbing lobby, expressing their horror that the general public could be ripped off, scammed even, by unqualified “handymen”, so it must remain illegal to do even the basics if you’re unqualified, even on your own house.

            Total rort.

        • nullc 4 hours ago

          And let me guess, this rule isn't eliminated if your property is isolated by a reduced pressure zone device?

          I assume that in your post "WA" means Western Australia -- as I can't imagine this kind of absurd protectionism law flying in Washington state, even though it's a little more paternalistic than average for the US.

  • t0lo a day ago

    If only there was a name for this fallacy. Something slope something

    • SchemaLoad 21 hours ago

      Slippery slope is only a fallacy when there is no reason to believe the end state is likely or desired.

      It seems quite likely that governments want to continuously chip away at privacy.

      • its-summertime 20 hours ago

        Slippery slope is specifically about opening the gates to further slipping. This clearly isn't the case since there is going to be slipping regardless of this specific instance going through all the way or not.

        • fc417fc802 19 hours ago

          > It's wrong to call this a slippery slope because we're not at the top but instead already well on our way down a slope that is indeed slippery.

          Not a convincing take.

          • Cartoxy 16 hours ago

            Australia is already living in a full-blown surveillance state. Over 330,000 metadata access requests were approved in a single year—no warrant needed. Agencies like Centrelink, the ATO, even local councils can tap into your private data. Police get access to your web browsing history directly from ISPs without judicial oversight. Encryption is being quietly undermined through laws like the TOLA Act, forcing tech companies to help spy or weaken their own systems. The government now mandates that AI search tools filter and flag content, shaping what people can even find online. When the AFP raided the ABC, they had the legal power to copy, alter, or delete files. Add to that Australia’s deep involvement in the global Five Eyes intelligence-sharing network, and it's clear: this isn’t future dystopia, it’s surveillance as a fact of life. Plus the NBN monopoly, TR-069 hard-locked on by default, and custom PCBs in NBN hardware (even to the point of new PCB runs with all headers and test points removed, not merely unpopulated). It took until the new rev of the Arris hardware before they even complied with the GPL licensing. Legit!

          • its-summertime 11 hours ago

            We aren't part way down the slope either. There is no slippery slope, they will press on citizens through every opportunity they can get, regardless of the progress they have or have not made in the past.

            It's more of a constantly lowering bar, not a slippery slope that just needs to be stopped once.

            Or in words you might find more appealing: it's worse than a slippery slope.

    • tbrownaw 21 hours ago

      So, is this particular slope likely to be slippery? Do governments have a history of looking for ways to control what information people can see, or looking for ways to identify people who post disfavored information?

bobbyraduloff 17 hours ago

Taken straight from the new regulation: “Providers of internet search engine services are not required to implement age assurance measures for end-users who are not account holders.”

How can you argue any of this is NOT in the interest of centralised surveillance and advertising identities for ADULTS when there’s such an easy way to bypass the regulation if you’re a child?

jackvalentine a day ago

Australians are broadly supportive of these kinds of actions: there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them, however imperfect.

Can’t say I blame them.

  • AnthonyMouse 20 hours ago

    > there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them however imperfect.

    This view is manufactured. The premise is that better moderation is available and despite that, literally no one is choosing to do it. The fact is that moderation is hard and in particular excluding all actually bad things without also having a catastrophically high false positive rate is infeasible.

    But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored aren't all the same people, and then the second group likes to pretend that there is a magic solution that doesn't throw the first group under the bus, so they can throw the first group under the bus.

    • marcus_holmes 20 hours ago

      This. This legislation has got nothing to do with moderation or "protecting children" - that's just the excuse that the government is using to push the legislation through. There are better ways of achieving that goal if that was the goal.

      The actual goal is, as always, complete control over what Australians can see and do on the internet, and complete knowledge of what we see and do on the internet.

      • l0ng1nu5 18 hours ago

        Agreed, but I'd also add the ability to prosecute anyone who writes something they don't like or agree with.

        • account42 12 hours ago

          They can already do that, see e.g. what the UK is doing in response to tweets. You don't need identity verification to have an ISP tell you the person behind an IP.

          • AnthonyMouse 2 hours ago

            IP addresses don't have anything like a 1:1 mapping to human beings and it's pretty trivial and inexpensive to get one from someone other than your ISP (e.g. use a VPN) if you have any concerns about that sort of thing.

      • globalnode 19 hours ago

        i think governments are confused by the internet. on the one hand business uses it to save money and pay taxes. broligarchs get rich from it. yet it exposes the unwashed masses to all sorts of information that might otherwise face censorship. there's always sex and drugs you can use as a reason to clamp down on things. the tough thing for them will be how do you rein in the plebs while also allowing business and advertising to function unfettered... tough times ahead :p

        p.s. i agree with your comment.

    • cmoski 13 hours ago

      I think it is less about stopping them from seeing naked pictures etc and more about stopping them getting sucked into the addictive shithole of social media.

      It will also make it harder for the grubby men in their 30s and 40s to groom 14yo girls on Snapchat, which is a bonus.

    • jackvalentine 20 hours ago

      > This view is manufactured. The premise is that better moderation is available and despite that, literally no one is choosing to do it. The fact is that moderation is hard and in particular excluding all actually bad things without also having a catastrophically high false positive rate is infeasible.

      Manufactured by whom? Moderation was done very tightly on vbulletin forums back in the day, the difference is Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.

      The magic solution is if you can't operate at scale safely, don't operate at scale.

      • AnthonyMouse 19 hours ago

        > Manufactured by whom?

        https://en.wikipedia.org/wiki/Manufacturing_Consent

        > Moderation was done very tightly on vbulletin forums back in the day, the difference is Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.

        The difference isn't the scale of Google, it's the scale of the internet.

        Back in the day the internet was full of university professors and telecommunications operators. Now it has Russian hackers and an entire battalion of shady SEO specialists.

        If you want to build a search engine that competes with Google, it doesn't matter if you have 0.1% of the users and 0.001% of the market cap, you're still expected to index the whole internet. Which nobody could possibly do by hand anymore.

        • jackvalentine 19 hours ago

          Maybe search is dead but doesn’t know it yet.

          Edit: you can’t just throw out a Wikipedia link to Manufacturing Consent from the 80s as an explanation here. What a joke of a position. Maybe people have been hoodwinked by a media conspiracy, or maybe they just don’t like what the kids are exposed to at a young age these days.

          • AnthonyMouse 19 hours ago

            > you can’t just grow a Wikipedia link to manufacturing consent from the 80s as an explanation here. What a joke of a position.

            Do you dispute the thesis of the book? Moral panics have always been used to sell both newspapers and bad laws.

            > Maybe people have been hoodwinked by a media conspiracy or maybe they just don’t like what the kids are exposed to at a young age these days.

            People have never liked what kids are exposed to. But it rather matters whether the proposed solution has more costs than effectiveness.

            > Maybe search is dead but doesn’t know it yet.

            Maybe some people who prefer the cathedral to the bazaar would prefer that. But the ability of the public to discover anything outside of what the priests deign to tell them isn't something we should give up without a fight.

            • jackvalentine 19 hours ago

              I dispute you’ve made any kind of connection between the two beyond your own feelings.

              I put it to you, similarly without evidence, that your support for unfettered filth freedom is the result of a process of manufacturing consent now that American big tech dominates.

              • AnthonyMouse 17 hours ago

                The trouble with that theory is that tech megacorps are a relatively recent development, whereas e.g. the court cases involving Larry Flynt were events from the 1970s and 80s and the likes of Hustler Magazine hardly had an outsized influence over the general media.

                Meanwhile morals panics are at least as old as the Salem Witch Trials.

                • jackvalentine 17 hours ago

                  Megacorps: simultaneously impotent and trillion-dollar companies.

                  • AnthonyMouse 17 hours ago

                    The US government has a multi-trillion dollar annual budget -- they spend more money every year than the entire market cap of any given megacorp -- and they can't solve it either. Maybe it's a hard problem?

      • g-b-r 19 hours ago

        Were web searches moderated?

    • bigfatkitten 20 hours ago

      > The premise is that better moderation is available and despite that, literally no one is choosing to do it.

      It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.

      Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.

      https://archive.is/8dq8q

      • AnthonyMouse 20 hours ago

        Your link says the opposite of what you claim:

        > Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.

        Moreover, the rest of the article is describing the difficulty in doing moderation. If you make a general purpose algorithm that links up people with similar interests and then there is a group of people with an interest in child abuse, the algorithm doesn't inherently know that and if you push on it to try to make it do something different in that case than it does in the general case, the people you're trying to thwart will actively take countermeasures like using different keywords or using coded language.

        Meanwhile user reporting features are also full of false positives or corporate and political operatives trying to have legitimate content removed, so expecting them to both immediately and perfectly respond to every report is unreasonable.

        Pretending that this is easy to solve is the thing authoritarians do to justify steamrolling innocent people, because nobody has any good way to fully eliminate the problem.

        • bigfatkitten 19 hours ago

          > Your link says the opposite of what you claim

          I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.

          Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.

          > Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.

          > Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.

          > Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”

          > After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.

          • AnthonyMouse 19 hours ago

            Your claim was that they "actively refuse" to do anything about it, but they clearly do actually take measures.

            As mentioned, the issue is that they get zillions of reports and vast numbers of them are organized scammers trying to get them to take down legitimate content. Then you report something real and it gets lost in a sea of fake reports.

            What are they supposed to do about that? It takes far fewer resources to file a fake report than investigate one and nobody can drink the entire ocean.

            • riffraff 17 hours ago

              But this goes back to the original argument: maybe if you can't avoid causing harm then you shouldn't be allowed to operate?

              E.g. if you produce eggs and you can't avoid salmonella at some point your operation should be shut down.

              Facebook and its ilk have massive profits, they can afford more moderators.

              • AnthonyMouse 17 hours ago

                > But this goes back to the original argument: maybe if you can't avoid causing harm then you shouldn't be allowed to operate?

                By this principle the government can't operate the criminal justice system anymore because it has too many false positives and uncaptured negative externalities and then you don't have anything to use to tell Facebook to censor things.

                > Facebook and its ilk have massive profits, they can afford more moderators.

                They have large absolute profits because of the large number of users but the profit per user is in the neighborhood of $1/month. How much human moderation do you think you can get for that?

                • fc417fc802 16 hours ago

                  > By this principle the government can't operate the criminal justice system

                  Obviously we make case by case decisions regarding such things. There are plenty of ways in which governments could act that populations in the west generally deem unacceptable. Private prisons in the US, for example, are quite controversial at present.

                  It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.

                  • AnthonyMouse 16 hours ago

                    > Obviously we make case by case decisions regarding such things.

                    You can make case by case decisions regarding individual aspects of the system, but no modern criminal justice system exists that has never put an innocent person behind bars, much less on trial. Fiddling with the details can get you better or worse but it can't get you something that satisfies the principle that you can't operate if you can't operate without ever doing any harm to anyone. Which implies that principle is unreasonable and isn't of any use in other contexts either.

                    > It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.

                    The premise there is that you could solve the problem for $30 per person annually, i.e. $2.50/month. I'm left asking the question again, how much human moderation do you expect to get for that?

                    Meanwhile, that's $30 per service. That's going to increase the network effect of any existing service because each additional recurring fee or requirement to submit payment data is a deterrent to using another one. And maybe the required fee would be more than that. Are you sure you want to entrench the incumbents as a permanent oligarchy?

                    • fc417fc802 6 hours ago

                      That doesn't follow. A principle's absolute form being unattainable doesn't mean the principle isn't of use. As I stated, you make a case-by-case judgment when applying it. That you aren't satisfied by the imperfection doesn't imply a lack of usefulness.

                      I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.

                      If the law discriminates on size you don't end up with (or at least exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.

                      • AnthonyMouse an hour ago

                        > That doesn't follow. The absolute of a principle being unobtainable doesn't mean it isn't of use. As I stated, you make a case by case judgment when applying it. That you aren't satisfied by the imperfection doesn't imply a lack of usefulness.

                        The principle was that if you can't operate without doing harm, you can't operate.

                        But then nobody can operate, including the government.

                        If you give up that absolutist principle and concede that there are trade-offs in everything, that's the status quo and there's nothing to fix. They already have the incentive to spend a reasonable amount of resources to remove those users, because they don't want them. The unfortunate reality is that spending a reasonable amount of resources doesn't fully get rid of them, and spending an unreasonable amount of resources (or making drastic trade-offs against false positives) is unreasonable.

                        > I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.

                        It's not about whether some specific user exceeds the threshold. You have a reporting system and some double-digit percentage of users will use it as an "I disagree with this poster's viewpoint" button. Competitors will use it to try to take down the competition's legitimate content. Criminal organizations will create fake accounts or use stolen credentials and use the reporting system to extort people into paying ransom or the fake accounts will mass report the victim's account, and then if even a small percentage of the fake reports make it through the filter, the victim loses their account. Meanwhile there are legitimate reports in there as well.

                        You would then need enough human moderators to thoroughly investigate every one of those reports, taking into account context and possibly requiring familiarity with the specific account doing the posting to determine whether it was intended as satire or sarcasm. The accuracy has to be well in excess of 99% or you're screwed, because even a 1% false positive rate means the extortion scheme is effective because they file 1000 fake reports and the victim's account gets 10 strikes against it, and a 1% false negative rate means people make 1000 legitimate reports and they take down 990 of them but each of the 10 they got wrong has a story written about it in the newspaper.
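                        The percentages in that scenario can be sketched numerically. This is a toy illustration of the commenter's own figures (the 1% rates and 1000-report volumes are the thread's hypotheticals, not platform data):

                        ```python
                        def expected_wrong_decisions(reports: int, error_rate: float) -> int:
                            """Expected number of incorrect moderation outcomes
                            out of `reports` reviewed reports."""
                            return round(reports * error_rate)

                        # Extortion scheme: 1000 fake reports against an innocent
                        # account; a 1% false-positive rate still lands ~10 strikes
                        # on the victim, enough to get the account banned.
                        strikes = expected_wrong_decisions(1000, 0.01)

                        # Legitimate reports: a 1% false-negative rate on 1000 real
                        # reports leaves ~10 pieces of genuinely bad content up,
                        # each a potential newspaper story.
                        misses = expected_wrong_decisions(1000, 0.01)

                        assert strikes == 10 and misses == 10
                        ```

                        The point of the arithmetic is the asymmetry: an accuracy that sounds excellent (99%) still fails in both directions once report volume is large.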

                        Banning the accounts posting the actual illegal content is what they already do, but those people just make new accounts. Banning the accounts of honest people who get a lot of fake reports makes the problem worse, because it makes it easier to do the extortion scheme and then more criminals do it.

                        > If the law discriminates on size you don't end up with (or at least exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.

                        But that was the original issue -- if you exempt smaller services then smaller services get a competitive advantage, and then you're back to the services people actually using not being required to do aggressive moderation. The only benefit then is that you got the services to become smaller, and if that's the goal then why not just do it directly and pass a law capping entity size?

            • fc417fc802 18 hours ago

              Active refusal can (and commonly does) take the form of intentionally being unable to respond, or merely putting on such an appearance. One of the curious things about Twitter pre-acquisition was that underage content somewhat frequently stayed up for months while discriminatory remarks were generally taken down rapidly. Post-acquisition, such content seemed to disappear approximately overnight.

              If the system is pathologically unable to deal with false reports to the extent that moderation has effectively ground to a standstill perhaps the regulator ought to get involved at that point and force the company to either change its ways or go out of business trying?

              • AnthonyMouse 17 hours ago

                > One of the curious things about Twitter pre-aquisition was that underage content somewhat frequently stayed up for months while discriminatory remarks were generally taken down rapidly. Post acquisition such content seemed to disappear approximately overnight.

                This isn't evidence that they have a system for taking down content without a huge number of false positives. It's evidence that the previous administrators of Twitter were willing to suffer a huge number of false positives around accusations of racism and the current administrators are willing to suffer them around accusations of underaged content.

                • fc417fc802 16 hours ago

                  I agree that on its own it isn't evidence of the ability to respond without excessive false positives. But similarly, it isn't evidence of an inability to do so either.

                  In the context of Australia objecting to lack of moderation I'm not sure it matters. It seems reasonable for a government to set minimum standards which companies that wish to operate within their territory must abide by. If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market. Or perhaps they would instead choose to charge users for the service? Either outcome would make room for fairly priced local alternatives to gain traction.

                  This seems like a case of free trade enabling an inferior American product to be subsidized by the vendor thereby undercutting any potential for a local industry. The underlying issue feels roughly analogous to GDPR except that this time the legislation is terrible and will almost certainly make society worse off in various ways if it passes.

                  • AnthonyMouse 16 hours ago

                    > I agree that on its own it isn't evidence of the ability to respond without excessive false positives. But similarly, it isn't evidence of an inability to do so either.

                    It is in combination with the high rate of false positives, unless you think the false positives were intentional.

                    > If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market.

                    If they actually required both removal of all offending content and a low false positive rate (e.g. by allowing customers to sue them for damages for removals of lawful content) then the services would exit the market because nobody could do that.

                    What they'll typically do instead is accept the high false positive rate rather than leave the market, and then the service remains but becomes plagued by innocent users being victimized by capricious and overly aggressive moderation tactics. But local alternatives couldn't do any better under the same constraints, so you're still stuck with a trash fire.

            • bigfatkitten 16 hours ago

              > but they clearly do actually take measures.

              Sometimes, but clearly not often enough.

              Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?

              > Then you report something real and it gets lost in an sea of fake reports.

              It didn’t get ‘lost’ — they (or their contract content moderators at Concentrix in the Philippines) sat on it, and then sent a message that said they had decided not to do anything about it.

              > What are they supposed to do about that?

              They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?

              • AnthonyMouse 16 hours ago

                > Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?

                That's assuming their "review team" actually reviewed it before sending that message and purposely chose to allow it to stay up knowing that it was a false negative. But that seems pretty unlikely compared to the alternative where the reviewers were overwhelmed and making determinations without doing a real review, or doing one so cursory the error was done blind.

                > They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?

                Almost certainly the second one. What would even be their motive to do the first one? Pedos are a blight that can't possibly be generating enough ad revenue through normal usage to make up for all the trouble they are, even under the assumption that the company has no moral compass whatsoever.

            • coryrc 17 hours ago

              > What are they supposed to do about that?

              Do like banks: Know Your Customer. If someone commits a crime using your assets, you are required to supply evidence to the police. You then ban the person from using your assets. If someone makes false claims, ban that person from making reports.

              Now your rate of false positives is low enough to handle.

              • AnthonyMouse 16 hours ago

                This is the post people should point to when someone says "slippery slope is a fallacy" in order to prove them wrong, both for the age verification requirements and for making banks do KYC.

                But also, your proposal would deter people from reporting crimes because they're not only hesitant to give randos or mass surveillance corporations their social security numbers, they may fear retaliation from the criminals if it leaks.

                And the same thing happens for people posting content -- identity verification is a deterrent to posting -- which is even worse than a false positive because it's invisible and you don't have the capacity to discover or address it.

    • Nursie 17 hours ago

      > The fact is that moderation is hard

      Moderation is hard when you prioritise growth and ad revenue over moderation, certainly.

      We know a good solution - throw a lot of manpower at it. That may not be feasible for the giant platforms...

      Oh no.

      • AnthonyMouse 17 hours ago

        This is the weirdest theory. The premise is that you admit the huge corporations with billions of dollars don't have the resources to pay moderators to contend with the professional-grade malicious content by profitable criminal syndicates, but some tiny forum is supposed to be able to get it perfect so they don't go to jail?

        • fc417fc802 16 hours ago

          > but some tiny forum is supposed to be able to get it perfect so they don't go to jail?

          Typically you would exempt smaller services from such legislation. That's the route Texas took with HB 20.

          • AnthonyMouse 15 hours ago

            So the companies that exceed the threshold couldn't operate there (e.g. PornHub has ceased operating in Texas) but then everyone just uses the smaller ones. Wouldn't it be simpler and less confusing to ban companies over a certain size unconditionally?

            • fc417fc802 6 hours ago

              That's hardly a good faith interpretation of the goals behind the Texas law. Also, HB 20 was about social media deplatforming, not identification.

              Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age-restricted content. The latter isn't amenable to size-based criteria, for obvious reasons.

              Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.

              • AnthonyMouse an hour ago

                > That's hardly a good faith interpretation of the goals behind the Texas law.

                It's plausible that it wasn't what some of the supporters intended, but that was the result, and the result wasn't entirely unpredictable. And it plausibly is what some of the supporters intended. When PornHub decided to leave Texas, do you expect they counted it as a cost or had a celebration?

                > Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age restricted content. The latter isn't amendable to size based criteria for obvious reasons.

                Would the former be any different? Sites over the threshold are forced to do heavy-handed moderation, causing them to have a significant competitive disadvantage over sites below the threshold, so then the equilibrium shifts to having a larger number of services that each fit below the threshold. Which doesn't even necessarily compromise the network effect if they're federated services so that the network size is the set of all users using that protocol even if none of the operators exceed the threshold.

                > Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.

                I'm still not clear on how they're supposed to do that.

                The general shape of the problem looks like this:

                If you leave them to their own devices, they have the incentive to spend a balanced amount of resources against the problem, because they don't actually want those users but it requires an insurmountable level of resources to fully shake them loose without severely impacting innocent people. So they make some efforts but those efforts aren't fully effective, and then critics point to the failures as if the trade-off doesn't exist.

                If you require them to fully stamp out the problem by law, they have to use the draconian methods that severely impact innocent people, because the only remaining alternative is to go out of business. So they do the first one, which is bad.

        • Nursie 16 hours ago

          > The premise is that you admit the huge corporations with billions of dollars don't have the resources to pay moderator

          My contention is more that they don’t have the will, because it would impact profits and that it’s possible that if they did implement effective moderation at scale it might hurt their bottom line so much they are unable to keep operating.

          Further, that I would not lament such a passing.

          I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.

          Find a way, or stop operating that service.

          • AnthonyMouse 15 hours ago

            > My contention is more that they don’t have the will, because it would impact profits and that it’s possible that if they did implement effective moderation at scale it might hurt their bottom line so much they are unable to keep operating.

            Is the theory supposed to be that the moderation would cost them users, or that the cost of paying for the moderation would cut too much into their profits?

            Because the first one doesn't make a lot of sense, the perpetrators of these crimes are a trivial minority of their user base that inherently cost more in trouble than they're worth in revenue.

            And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.

            > I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.

            Should the small forums be able to get away with it though? Because they're the ones even more likely to be operating with a third party ad network they neither have visibility into nor have the leverage to influence.

            > Further, that I would not lament such a passing.

            If Facebook was vaporized and replaced with some kind of large non-profit or decentralized system or just a less invasive corporation, would I cheer? Probably.

            But if every social network was eliminated and replaced with nothing... not so much.

            • pferde 14 hours ago

              Smaller forums are more likely to handle moderation effectively and in a timely manner. I frequent a few such forums, and have seen consistently good moderating for many years.

            • Nursie 14 hours ago

              > the cost of paying for the moderation would cut too much into their profits?

              This one. Not just in terms of needing to take on staff, but it would also cut into their bottom line in terms of not being able to take money from bad-faith operators.

              > And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.

              Inability to do something properly and make a commercial success of it is a 'you' problem.

              Take Meta and their ads - they've built a system in which it's possible to register and upload ads and show them to users more or less instantly, with more or less zero human oversight. There are various filters to try to catch stuff, but they're imperfect, so they supply fraudulent ads to their users all the time - fake celebrity endorsements, various things that fall foul of advertising standards, some just outright scams. (Local family store you never heard of is closing down! So sad! Buy our dropshipped crap from AliExpress at 8x the price!)

              To properly, fully fix this they would need to verify advertisers and review ads before they go live. This is going to slow down delivery, require a moderate sized army of reviewers and it's going to lose them revenue from the scammers. So many disincentives. So they say "This is impossible", but what they mean is "It is impossible to comply with the law and continue to rake in the huge profits we're used to". They may even mean "It is impossible to comply with the law and continue to run facebook".

              OK, that's a classic 'you' problem. (Or it should be). It's not really any different to "My chemical plant can't afford to continue to operate unless I'm allowed to dump toxic byproducts in the river". OK, you can't afford to operate, and if you keep doing it anyway, we're going to sanction you. So ... Bye then?

              > Should the small forums be able to get away with it though?

              This is not really part of my argument. I don't think they should, no. But again - if they can't control what's being delivered through their site and there's evidence it contravenes the law, that's a them problem and they should stop using those third party networks until the networks can show they comply properly.

              > if every social network was eliminated and replaced with nothing... not so much.

              Maybe it's time to find a new funding model. It's bad enough having a funding model based on advertising. It's worse having one based on throwing ad messages at people cheap and fast without even checking they meet basic legal standards. But here we are.

              I realise this whole thing is a bit off-topic as the discussion is about age-verification and content moderation, and I've strayed heavily into ad models....

      • account42 12 hours ago

        Throwing a lot of manpower at moderation only gets you lots of little emperors that try to enforce their own views on others.

  • SchemaLoad 21 hours ago

    I'm split on it. 100% agree that kids being off social media is better for society. But I can't see how it could be enforced without privacy implications for adults.

    • 2muchcoffeeman 19 hours ago

      Don’t buy them devices and lock down your computer and networks.

      I guess if a teenager is enterprising enough to get a job and save up and buy their own devices and pay for their own internet then more power to them.

      • SchemaLoad 19 hours ago

        Obviously an impossible task. Kids need computers for school, and every school provides laptops. Kids don't need access to social media.

        • 1718627440 16 hours ago

          No? This was how it was for me. And the only downside was that all the other kids were glued to their smartphones.

          Why is this even controversial? Is there any rational reason why kids should have smartphones? The only reason I see is to let the big companies earn money, and because adults don't want to admit that they are addicted themselves.

    • fc417fc802 18 hours ago

      Perhaps enforcement at the user end isn't what's needed. A perfect solution is likewise probably unnecessary.

      As but one possible example: common infrastructure to handle whitelisting would probably go a long way here. Just being able to tag a phone, for example, as being possessed by a minor would enable all sorts of voluntary filtering with only minimal cooperation required.

      Many sites already have "are you 18 or older" type banners on entry. Imagine if those same sites attached a plaintext flag to all of their traffic so the ISP, home firewall, school firewall, or anyone else would then know to filter that stream for certain (tagged) accounts.

      I doubt that's the best way to go about it but there's so much focus on other solutions that are more cumbersome and invasive so I thought it would be interesting to write out the hypothetical.
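      As a rough sketch of that tagging idea (the header name, the account tags, and the function are all invented for illustration, not any real standard):

      ```python
      # Hypothetical sketch of the "plaintext flag" idea: the site labels its
      # responses, and any intermediary (home router, school firewall, ISP)
      # decides per-account whether to filter. Nobody's identity is verified.

      MINOR_ACCOUNTS = {"kids-tablet", "school-laptop-07"}  # devices tagged by a parent/admin

      def should_block(response_headers: dict, account: str) -> bool:
          """Block only when the site self-labels AND the account is tagged as a minor."""
          site_flags_adult = response_headers.get("X-Content-Rating", "").lower() == "adult"
          return site_flags_adult and account in MINOR_ACCOUNTS

      assert should_block({"X-Content-Rating": "adult"}, "kids-tablet") is True
      assert should_block({"X-Content-Rating": "adult"}, "parents-phone") is False
      assert should_block({}, "kids-tablet") is False
      ```

      Any intermediary that can see the flag could apply the same check; adults' traffic passes through untouched.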

      • SchemaLoad 18 hours ago

        Yeah that seems pretty reasonable. Apple and Google could extend their parental controls to send a header or something flagging an underage user, for sites to then either block, or remove social elements from the page.

        Seems like right now the Aus Government isn't sure how they want it to work and is currently trialing some things. But it does seem like they at least don't want social media sites collecting ID.

      • ptek 15 hours ago

        Are you 18 or older?

        You don’t get that notification showing up when you buy alcohol or cigarettes at a shop; it would have been easier being a minor buying beer. The porn companies know what they are doing, or they would have created an adults-only robots.txt and published an RFC. Hope they won't ask for age verification for the shroomery.

  • bigfatkitten a day ago

    If you read through the issues that ASIO says they are most concerned about, it’s clear that companies like Meta have a lot to answer for.

    https://www.intelligence.gov.au/news/asio-annual-threat-asse...

    • hilbert42 21 hours ago

      We don't need ASIO to tell us that. The real problem is that early on, when Big Tech first took a stranglehold of the internet in the early 2000s, governments failed to regulate; they did SFA despite the warning signs.

      At the time it was obvious to many astute observers what was happening but governments themselves were mesmerized and awed by Big Tech.

      A 20-plus year delay in applying regulations means it'll be a long hard road to put the genie back in the bottle. For starters, there's too much money now tied up in these trillion-dollar companies; to disrupt their income would mean shareholders and even whole economies would be affected.

      Fixing the problem will be damn hard.

      • BLKNSLVR 21 hours ago

        Even harder now that the US President is siding with the tech broligarchy since it aligns perfectly with the America First ideology.

        (It may be the last thing that the US has the world lead on)

        It's also why legislation protecting privacy and/or preventing the trade of personal information is almost impossible: the "right" people profit from it, and the industry around it has grown large enough that it would have non-trivial economic effects if it were destroyed (no matter how much it thoroughly deserves to be destroyed with fire).

    • marcus_holmes 19 hours ago

      I just read through that, and it doesn't even mention Meta. Why do you think this is about "companies like Meta"?

southernplaces7 14 hours ago

What is it with some of the anglo countries and these ridiculous slides into nannying, vaguely repressive surveillance? It's not even much use for real crime fighting, as the case of the UK amply and frequently demonstrates.

  • florkbork 13 hours ago

    https://parlinfo.aph.gov.au/parlInfo/search/display/display....

    Read the legislation. Ask yourself if it's better for a country's government or a foreign set of social media companies to control what young people see. One has a profit motive above all else. One can be at least voted for or against.

    • baobun 12 hours ago

      Parents seem like the appropriate authority for minors?

      Neither should control adults.

    • Pooge 12 hours ago

      One has a profit motive influenced by legislation/regulation put in place by the other, which the latter seems to have had no interest in doing for the last 20 years.

  • jgaa 13 hours ago

    It's what happens when the people governing are terrified of the people they govern.

  • lioeters 12 hours ago

    > vaguely repressive surveillance

    It's authoritarianism, and frankly paving the way for fascism. People are already getting visits from the police for unsavory Facebook posts. Be careful not to criticize your government online, because soon every post will be instantly judged by an AI system and you'll be flagged as a disobedient citizen in need of a bit of the old boot.

Palmik 18 hours ago

It's interesting how all countries work in tandem implementing these measures. UK, EU, some US States and now Australia all require or will soon require age verification under certain conditions.

It seems like it would make more sense to implement it at the browser level. Let the website return a header (a la RTA) or trigger some JavaScript API to indicate that the browser should block the tab until the user verifies their age.
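A minimal sketch of that browser-side gate, assuming the site self-labels. The RTA string is the real voluntary label some adult sites already embed; the `Rating` header usage and the detection flow here are illustrative, not how any actual browser works:

```python
# Sketch: a browser could gate a tab whenever a site self-labels,
# either via a hypothetical "Rating" response header or the RTA meta tag.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def needs_age_gate(headers: dict, html: str) -> bool:
    """True if the site self-labels via a header or the RTA label in its markup."""
    if headers.get("Rating", "").strip() == RTA_LABEL:
        return True
    return RTA_LABEL in html  # crude: a real browser would parse the <meta> tag

assert needs_age_gate({"Rating": RTA_LABEL}, "") is True
assert needs_age_gate({}, f'<meta name="RTA" content="{RTA_LABEL}">') is True
assert needs_age_gate({}, "<html>hello</html>") is False
```

The point is that the site only has to declare itself once; the age check itself (and any ID) stays on the user's own device.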

  • riffraff 18 hours ago

    I think lawmakers gravitate towards "required identification" because 1) it's easier to put blame on a single website than on whatever browser + the websites 2) it matches the experience of age restriction for movies and magazines, where age is enforced by whoever sells you the thing or allows access 3) client side restrictions seem easier to circumvent 4) some lawmakers probably think grown ups shouldn't watch porn either.

    IMO an "ok" solution to the parents' requirements of "I want my kids to not watch disturbing things" might be to enforce domain tags (violence, sex, guns, religion, social media, drugs, gambling, whatever) and allow ISPs to set filters per paying client, so people don't have to setup filters on their own (but they can).

    But it's a complex topic, and IMO a simpler solution is to just not let kids alone in the internet until you trust them enough.
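    The domain-tag idea above could look something like this (the tag registry and subscriber preferences are invented for illustration; a real ISP would need a shared tagging standard):

    ```python
    # Sketch of the per-subscriber ISP filter: domains carry category tags,
    # and each paying customer opts into which categories to block.

    DOMAIN_TAGS = {
        "example-casino.test": {"gambling"},
        "example-news.test": set(),
    }

    SUBSCRIBER_BLOCKS = {
        "household-A": {"gambling", "violence"},  # parents opted in to filtering
        "household-B": set(),                     # no filtering requested
    }

    def blocked(subscriber: str, domain: str) -> bool:
        """Block when the domain's tags intersect the subscriber's chosen blocklist."""
        tags = DOMAIN_TAGS.get(domain, set())
        return bool(tags & SUBSCRIBER_BLOCKS.get(subscriber, set()))

    assert blocked("household-A", "example-casino.test") is True
    assert blocked("household-B", "example-casino.test") is False
    assert blocked("household-A", "example-news.test") is False
    ```

    Filtering stays opt-in per account, so adults who want nothing blocked simply leave their set empty.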

ethan_smith a day ago

Australia's been down this road before with the failed 2019 age verification bill and the Online Safety Act. The technical implementation challenges are enormous - from VPN circumvention to privacy risks of ID verification systems.

  • frollogaston a day ago

    Well the age assurance is only for logged-in users, so they can just log out.

    • postingawayonhn a day ago

      The article doesn't quite spell it out but I assume you won't be able to turn off safe search unless you log in with a verified 18+ account.

      • frollogaston a day ago

        It does say "default" which implies you can turn off the filter, but yeah it's not very clear and does make a big difference. For example, YouTube already won't let you view flagged content without signing in.

      • SchemaLoad 21 hours ago

        This is how Youtube works now. Age restricted videos can't be viewed without logging in.

        • ptek 15 hours ago

          Yeah. Can’t watch king of the streets or some video game trailers.

        • _Algernon_ 12 hours ago

          Ah yes. How terrible would it be if an angsty teen watched the music video to Linkin Park's Numb. Truly the end of the world.

    • hilbert42 21 hours ago

      Unfortunately, there are two problems with that approach. First, Google, with fingerprinting, cookies, IP addresses etc., will still know who you are. Second, even in the rare event that you are able to make yourself anonymous, the search results you're dished up can be filtered without your knowledge.

      That would have the same effect.

    • bigfatkitten a day ago

      You’d have a hard time finding a 12+ year old who doesn’t know what Incognito mode in Chrome is for.

    • _aavaa_ 21 hours ago

      Ahh yes, a technological solution to a political problem.

    • senectus1 a day ago

      Hmmm, another self-hosted service to knock up: a proxied search engine.

  • bamboozled a day ago

    I guess they should just succumb to the US big tech machine without trying anything. I get the sentiment though; maybe doing something that won't work is worse than doing nothing.

amaterasu 21 hours ago

The co-leads on drafting the code are rather interesting:

> Drafting of the code was co-led by Digital Industry Group Inc. (DIGI), which was contacted for comment as it counts Google, Microsoft, and Yahoo among its members.

  • fc417fc802 20 hours ago

    Do you suppose this is born of a desire to more easily identify people, or primarily as a regulatory fence to prevent upstart competitors? Perhaps both?

  • shirro 13 hours ago

    Yes. As usual people commenting based on their biases instead of comprehending the text. This is a proposal made by predominantly US companies (a country that actually has mandatory proof of age to access digital services in several states) to a US born eSafety commissioner who previously worked for Microsoft, Adobe and Twitter.

    Not really sure what this has to do with the Australian government or Australian people. We can't even properly tax these foreign companies fairly. If we did try to regulate them the US government would step in and play the victim despite a massively one sided balance of trade due to US services being shoved down our throats. We need to aggressively pursue digital sovereignty.

Cartoxy 17 hours ago

Aims to protect kids online, but it could easily go too far. It covers way more than just search engines—pretty much anything that returns info, including AI tools.

It pushes for heavy content filtering, age checks, and algorithm tweaks to hide certain results. That means more data tracking and less control over what users see. Plus, regulators can order stuff to be removed from search results, which edges into censorship. Sets the stage for broader control, surveillance, and over-moderation. The slow-burn additions all stack up: digital ID, the NBN monopoly, ISP-locked DNS servers, TR-069, hidden VoIP credentials, etc. Australia seems to be the West's testing ground for this kind of policy.

ratchetgo1 20 hours ago

Grand Fascist State Censor Julie Inman Grant strikes again. Another disgraceful loss of privacy for the country defining anglophone technological totalitarianism.

  • tjmc 16 hours ago

    "eKaren" is shorter

  • ActorNightly 19 hours ago

    Meh, this is minor political fluff. Australia is still doing quite good.

    • Cartoxy 16 hours ago

      In the digital rights and government spying department? Maybe not vs China or North Korea, but in the "west" we are probably the worst. Easily.

      • account42 12 hours ago

        Don't worry, the rest of the west is doing its best to catch up with you.

      • defrost 12 hours ago

      You're asserting AU TLAs are outperforming the UK's GCHQ et al., the US's NSA and friends, the private company Palantir, Israel's Unit 8200, etc?

        Might want to wind back that Aussie Exceptionalism a notch or three. That or read up a little more.

shirro 15 hours ago

This looks like a voluntary industry code of conduct made by US companies Microsoft, Google etc. I am not aware of any legislation that would require this in Australia. If the commissioner thinks the industry codes are insufficient she might advise the government that a legislative approach is required but she is not an Australian politician and was not elected by anyone here.

The eSafety commissioner is an American born ex-Microsoft, Adobe and Twitter employee who was appointed by the previous conservative government. I wouldn't be so sure her values are representative of the so-called Australian nanny state or the Australian Labor Party.

Sevrene a day ago

I’m an Australian who values privacy and civil liberties more than most I meet.

While I yearn for the more authentic and sincere days of the internet I grew up on, I recognize very quickly by visiting x or facebook how much it isn’t that, and hasn’t been for a long time.

I think this bill is a good thing and I support it.

  • SturgeonsLaw 15 hours ago

    I’m an Australian who values privacy and civil liberties more than most I meet, and that's why I think this bill is horrible, is full of unintended consequences, and will be worked around by kids who care to do it.

  • hilbert42 21 hours ago

    "I’m an Australian who values privacy and civil liberties more than most I meet."

    Same here. Early on, if I found a site interesting I'd often follow its links to other sites and so on down into places that the Establishment would deem unacceptable but I'd not worry too much about it.

    Nowadays, I just assume authorities of all types are hovering over every mouse click I make. Not only is this horrible but it also robs one of one's autonomy.

    It won't be long before we're secretly passing around info that was once commonplace in textbooks.

  • fc417fc802 18 hours ago

    Aren't privacy and civil liberties fundamentally at odds with centralized government issued ID checks? How can you claim to value the former while supporting a plan to require the latter?

    In the days before electronics were endemic, physically checking a photo ID didn't run afoul of that as long as the person checking didn't record the serial number. But that's no longer the world we live in.

  • marcus_holmes 19 hours ago

    I don't understand why you think this bill and that phenomenon (the fact that Xitter or Facebook aren't like the old days of the internet) are connected, can you explain why you think this, please?

  • veeti 18 hours ago

    Evidently the bar for valuing such things is set very low in Australia.

  • g-b-r 19 hours ago

    This is the account's first message here in two years

  • frollogaston 20 hours ago

    The AI-based version of this looks fine, the ID checks are odd though

  • theshackleford 15 hours ago

    >I think this bill is a good thing and I support it.

    Uhuh.

    >I’m an Australian who values privacy and civil liberties more than most I meet.

    No you're not.

  • Nasrudith 17 hours ago

    Are you sure you value privacy and civil liberties then if you fall for "Think of the Children" bollocks instead of wanting to throw politicians down wells to protect children from living in a dystopia?

eidorb a day ago

Minors’ accounts must also revoke “sign out” functionality in case they see some titties.

  • HKH2 a day ago

    > However, users who are not logged in should also expect “default blurring of images of online pornography and high-impact violence material detected in search results”.

  • SoftTalker 20 hours ago

    To be fair, most of the concern is about stuff that's far more hard-core than "titties"

bn-l 13 hours ago

> Age assurance methods can include age verification systems, which use government documents or ID; age estimation systems, which typically use biometrics; and age inference systems, which use data about online activity or accounts to infer age.

Oh how convenient.

ggm 18 hours ago

Homomorphic encryption and third parties. No need for government eyes to know axiomatically which 100pts ID verified which login, nor for the website or search engine to know who the real person is.

Most legislation aims to create the offence of misleading, not actually stamp out 100% of offenders. Kids who get round this will make liabilities for themselves and their parents.
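Skipping actual homomorphic encryption, the separation-of-knowledge idea can be shown with a much simpler stand-in: an anonymous signed token issued by the third party that saw the ID. All names here are illustrative, and a real design would use blind signatures so tokens can't be linked or resold:

```python
# One party checks ID, another serves content, and neither learns the
# other's half. The token asserts "over 18" without naming anyone.

import hashlib
import hmac
import secrets

SHARED_KEY = b"verifier-and-site-shared-secret"  # distributed out of band

def issue_token() -> tuple[str, str]:
    """Verifier: checks the user's 100pts ID offline, then issues an
    anonymous token. The ID itself never leaves the verifier."""
    nonce = secrets.token_hex(16)
    tag = hmac.new(SHARED_KEY, f"over18:{nonce}".encode(), hashlib.sha256).hexdigest()
    return nonce, tag

def site_accepts(nonce: str, tag: str) -> bool:
    """Site: verifies the claim without ever learning who the person is."""
    expected = hmac.new(SHARED_KEY, f"over18:{nonce}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue_token()
assert site_accepts(nonce, tag)
assert not site_accepts(nonce, "0" * 64)
```

The government sees an ID check happened; the site sees only a valid "over 18" assertion; nobody holds the join between the two.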

yakshaving_jgt 18 hours ago

As an Australian citizen, this further reinforces my position that the greatest trick the devil ever pulled was convincing the world that Australia is a laidback country full of easygoing people.

It isn’t. For as long as I can remember it’s been wildly authoritarian, and it seems Australians harbour a fetish for the rules that would make even the average German blush.

Hopefully times have changed (though I don’t think they have), but about 20 years ago, standard fare on the road was to provide essentially no driver training, and then aggressively enforce draconian traffic rules. New drivers can’t drive at night. New drivers have to abide by lower speed limits than other drivers. Police stop traffic for random breathalyser tests. “Double demerit” days…

This seems like more of the same. Forget trying to educate the population about the dangers of free access to information (which they will encounter anyway). Just go full Orwell! What could go wrong!

azov 18 hours ago

I wonder if technical complexity of implementing online age checks is about the same as implementing a robust direct democracy system - one where people can vote down bad laws instead of outsourcing those decisions wholesale to politicians they don’t even like?..

  • ggm 18 hours ago

    I predict Lower.

    Unrelated, but why I don't agree:

    The systems which permit voting down stupid laws also permit voting down good laws. This is very "be careful what you wish for" and reductive to "the voter is always right even when they want stupid things" interpretation of democracy.

    E.g. Swiss cantons opposing votes for women inside the last 2 decades.

    • azov 18 hours ago

      Well, direct democracy already exists in various forms (e.g., referendums, propositions on California ballots, etc.). Sometimes bad decisions are made, but I wouldn’t call it a total disaster. Can it be improved through technical means? How much improvement would it take for it to be better than the status quo?

    • _Algernon_ 18 hours ago

      They don't have to be always right, just be right more often than a representative democracy.

jauntywundrkind 21 hours ago

What an awful sad fall for us all, from such lofty heights of possibility for technology, to a seemingly endless age of both humans being exploited and mechanized by technology and governments doing only the saddest, most impotent, useless pearl-clutching fear responses that do nothing to coax the world towards better.

Apologies. I'm already pretty morose over the USA Supreme Court allowing age verification, which although claiming to target porn seems so likely to cudgel any "adult" or sexual material at all.

Until recently the Declaration of Independence of Cyberspace has held pretty true. The online world has seen various regulations but mostly it's been taxes and businesses affected, and here we see a turn where humanity is now denied access by their governments, where we are no longer allowed to connect or to share, not without flashing our government verified ID. It's such a sad lowering of the world, by such absolute loser politicians doing such bitter pathetic anti-governance for such low reasons. They impinge on the fundamental dignity and respect inherent in mankind here, in these intrusions into how we may think and connect.

Links for recent Texas age verification: https://www.wired.com/story/us-supreme-court-porn-age-verifi... https://news.ycombinator.com/item?id=44397799

aucisson_masque 16 hours ago

> Search engines will not be required to implement age assurance measures for users who are not logged in to their services, according to the new rules.

protocolture a day ago

Stupid bipartisan authoritarian bs, so basically a normal day for the australian government.

9283409232 20 hours ago

This is very simplistic but at a certain point I feel like parents should just be better parents and take responsibility for what their children do online in their home.

g42gregory 17 hours ago

Freedom and democracy in action.

incompatible a day ago

Do they realise that some of us may be using computers that don't even have a camera, and open source software that could in theory upload any image we like?

  • Ycros 20 hours ago

    It uses a video feed and asks you to look in certain directions. At least the one instance I've encountered did.

    • hsbauauvhabzb 12 hours ago

      Yeah. Certainly something AI generated video couldn’t solve.

      • someNameIG an hour ago

        It shouldn't be too difficult to determine if the camera is pointed at a real face vs a screen showing an AI generated image.

nenadg 16 hours ago

ah another one from the series of govt ideas so good that they have to be enforced

BLKNSLVR a day ago

Nice to see the ACS implementing their own dark patterns in making the "Close" text in the top right of their full screen pop-up light-grey and thus difficult to find.

/s

  • glaucon 20 hours ago

    Yep, it took me a while to find that.

pevansgreenwood a day ago

With search moving from Google & MS to TikTok et al., is this shutting the barn door after the horse has bolted?

t0lo a day ago

As an Australian citizen I'm all for it. Look at how the internet and social media have destroyed our current youth and their naivety and sense of emotional security. They all act like they're living in Soviet Russia at this point and have become so hard and jaded.

Better I give a little bit of pii than some kid grows up too early.

Would you be able to tell the difference if this policy came from a place of compassion?

  • abtinf a day ago

    > They all act like they're living in soviet russia

    Nothing says “not living in Soviet Russia” like having to show your papers to access information.

    • jp0d a day ago

      what's the alternative? Is it really information or misinformation?

      • knifie_spoonie a day ago

        Education.

        I really wish all this time, effort, and money was spent on educating our kids to safely navigate the online world.

        It's not like they'll magically figure it out for themselves once they turn 17.

        • jp0d 21 hours ago

          Totally agree with this.

      • frollogaston a day ago

        I remember when any anti-Iraq-invasion material was considered "misinformation" in the US. Wonder how it went in Australia, since they were also very involved.

        • defrost 21 hours ago

          My recollection of the time is that most citizens that paid attention and a majority of the politicians in the UK and AU were fully aware the "intel" was sketchy and the motivations impure .. the debate was less about the information quality and more about the obligation to partner with the US in the invasion.

          The UK PM and the AU PM backed the US position and sent troops in (in the AU case they even sent in advance rangers | commandos | SASR to scout and call targets from ground) but they were both aware the "justification" and WMD claims were BS.

          • dfxm12 21 hours ago

            So some government officials were probably in the pocket of Halliburton (i.e., just like the US government) while selling a weak justification to the public.

            https://www.greenleft.org.au/content/halliburton-australia-p...

            • defrost 20 hours ago

              Such things play a part, of course, however at a nation level the first order consideration would have been ANZUS like defence agreements and a sense that ongoing regional support from the US rested on Australian support for the US, right or wrong.

              Been ongoing for a while now: https://roncobb.net/img/cartoons/aus/k5092-on-Tucker_Box-cuu...

              • marcus_holmes 20 hours ago

                This. Whether the USA had a mandate to go into Iraq wouldn't have been questioned. Australia jumped in because we always jump in to whatever bullshit war the USA dreams up. For some reason we see it as an obligation to support our allies in all their wars, even when we think their reasons are ridiculous and even when we know they won't support us in return.

                This has led to serious problems in the case of the Afghan war, where it was clear that this whole conflict had nothing to do with Australia, could not even vaguely be construed as "defence", achieved nothing, cost Australian lives, and was a completely fabricated mess that we got into for really bad reasons (I paraphrase). The SAS war crimes thing was a symptom of our unease at our involvement (imho) - we would not normally question the things that soldiers do in conflict; this was more a way of questioning why we were in the conflict in the first place.

          • toyg 16 hours ago

            The UK PM, Tony Blair, actually pushed the "45 minutes" fabrication. Some of his MPs might have been sceptical, but Blair was very clearly itching to be a wartime PM.

            What you describe is more like the debate on continental Europe, which translated in little support (most countries provided help with logistics and minimal "peacekeeping").

          • palmfacehn 16 hours ago

            My anecdotal, non-Aussie observation: Yes, doubts over the WMD debacle were shouted down as nutty conspiracy theory. The usual rhetoric was employed, "If such a wide ranging conspiracy were truly afoot, wouldn't someone blow the whistle?"

            Afterwards the same people who employed this rhetoric claimed they, "Always knew the claims were false".

            There was definite risk of loss of political capital for would-be dissenters. Politicians may or may not have had skeptical reservations. It is a moot point if they didn't proactively dissent. Similarly, it isn't especially meaningful in the context of this discussion if those who did dissent were locked out of popular media discourse. The overall media environment repeated the claims unquestioningly. Dissent was maligned as conspiracy theory.

            Another interesting manifestation were those who claimed that WMDs were found. Clearly the goal posts were shifted here. Between those who were "always suspicious" and those who believe that the standards of WMDs were met, very few people remain who concede that they were hoodwinked by the propaganda narrative. Yet at the same time, it isn't a stretch to observe that a war or series of wars was started based on false premises. No one has been held to account.

      • abtinf a day ago

        > misinformation

        Nothing screams “not living in Soviet Russia” like having a ministry of truth.

        • jp0d a day ago

          > “not living in Soviet Russia”

          Nothing screams "fear mongering" like comparing with living in Soviet Russia.

          Look, we can argue all day. There is no right or wrong answer. I don't fully support the govts initiative but I also don't want Meta/X/Google to have unlimited powers like they do in the US.

          • fc417fc802 20 hours ago

            > I don't fully support the govts initiative but I also don't want Meta/X/Google to have unlimited powers like they do in the US.

            Various large US tech companies played a central role in drafting this initiative. I don't think you're reasoning about this clearly.

            How exactly does this curtail their powers?

          • dfxm12 21 hours ago

            Can you explain how limiting a regular citizen's freedom constrains Meta/X/Google's power?

        • bamboozled a day ago

          So is being fed propaganda 24/7, the KGB seems to be winning by reading some of these comments.

          I don't see kids being banned from reading history books, which would be more like the world you're describing. I see a country which is pretty multicultural and open minded trying its best to protect itself from the absolute nonsense that circulates online. When I was a kid, I could only watch certain TV shows because my bed time was 7:30-8pm; that's when the "naughty stuff" came on TV. Was that the ministry of truth at work?

          Do you have any idea what kids are exposed to now ? I mean the answer is probably, no, you have no idea. But judging by the rot I see my younger friends and family members watch and regurgitate, I can tell you, it's not great.

          • t0lo 21 hours ago

            Yep, mutual delegitimization and hasbara are operating in full force. Over 80 countries have "cyber troops" - there are so many countries trying to destroy the social fabric of the West. Why shouldn't we shield our children, who have no way of understanding or protecting themselves from it? Plus the fact that the "thought and ideological leaders" of this generation have no thoughts or coherent ideologies is pretty telling.

      • bamboozled a day ago

        We failed to give kids the skills to think critically (because that's not in the ruling class's best interest), so now, to keep the population under some form of governability, information has to be restricted so people don't end up destroying their own society. Nice.

        I agree though, most information is misinformation, even the most popular stuff, Joe Rogan et al.

        • Dylan16807 21 hours ago

          Critical thinking lessons are not enough to protect kids.

          • bamboozled 19 hours ago

            You’re right, I misspoke. Kids should be off phones and the internet until a certain age, but while they’re offline they need to be prepared to deal with the onslaught of rubbish they will face once they’re online. Including AI generated nonsense.

  • jp0d a day ago

    As an Australian citizen I'm not fully in favour of this. But I think I agree that we need some protection from companies like Meta/Google etc influencing our youth based on the American political "situation".

    • selcuka 13 hours ago

      You are aware that Meta/Google etc are behind this bill, aren't you? They don't want anonymous users. They want fully identified, age-verified ad consumers.

    • Nasrudith 17 hours ago

      So do you keep hemlock on hand just in case Socrates resurrects, too, if you are that afraid of the youth being influenced by outside opinions?

  • CamperBob2 a day ago

    Better I give a little bit of PII than some kid grows up too early.

    And at no point does it ever occur to you to demand proof that measures such as this will have the desired effect... or, indeed, that the desired effect is worth achieving at all.

    • t0lo a day ago

      Oh no, the government and the ISP know what the average non-tech-savvy Australian is searching; this is unprecedented!

      I am for anonymous tokens ideally, but something is still better than nothing.

      • CamperBob2 a day ago

        > Oh no the government and the isp know what the average non tech savvy australian is searching- this is unprecedented!

        You probably should have started your censorship campaign with the usual bugaboos -- comics, video games, porno mags -- and not with history books.