Salient Issue 12 - Volume 88

Interview: Lower Hutt Mayor Campbell Barry on the Living Wage

Scrapping pay equity can be loud, but silence is usually louder. A noticeably empty chair sat in a packed St Bernadette’s School on Tuesday 13 May, where the Living Wage Movement held a community action meeting to further Resene employees’ goal of being paid the living wage ($27.80 per hour before tax). The seat in question was set out for Nick Nightingale, current Resene CEO and heir to his grandfather’s paint empire. His Naenae headquarters is located just across the road from one of Resene’s two Hutt Valley factories. Unfortunately, despite the invitation extending to the whole Nightingale family, none of them showed up, leaving Team Naenae Trust coordinator Lillian Pak to express disappointment: “it's the workers here today who enable Resene to operate and make millions”.

I’m involved in the Living Wage Movement, and in this instance I got roped into taking photos (which was fun, I’ll admit). There was such a warm energy in the room, beginning right from when we were greeted by St Bernadette’s School’s tremendous kapa haka. Despite going out to Naenae sometimes as a kid to visit family friends, Lower Hutt is still a wee bit tough for a vehicularly impaired Ngaio boy like me to get to - I took two trains at rush-hour prices. But upon hearing “Ka Pioioi E” I found myself feeling right at home, surrounded by the salt-of-the-earth Labour stalwarts who remind me of my own family, and whom the parliamentary party has all too often forgotten.

Among those at the meeting was outgoing Lower Hutt mayor Campbell Barry. I sought to nab an interview with him as he was on his way out, and to my delight he accepted, being, and I quote, “a Vic old boy meself”, keen to do Old Salient a wee favour. We were only briefly interrupted when Living Wage coordinator Finn Cordwell turned up and reminded me I was meant to be taking photographs. While poor Mr Barry stood unsure of whether to stay or go, I promptly handed my camera over to Finn and whipped out my phone to record the interview. Standing out there in the bitter Lower Hutt cold like a dishevelled first year (my scarf had been caught over my face as I pulled my camera off from my neck), I managed to ask Barry how long he’s been involved in the Living Wage.

His staunch involvement dates back to 2016, back when the Living Wage Movement here in the Hutt Valley was advocating for the council to become a living wage employer: “I was as supportive as a councillor so, [I’ve] been quite supportive for nearly a decade now”. For his first mayoral run in 2019, spreading the living wage to all council workers was his first pledge on the road to Lower Hutt becoming a living wage city. Now, Barry’s “really proud to have a council that reflects our community”. By paying the living wage to all its 500 employees, “they are all able to go home and [do the basics]” in a way that upholds their dignity. During COVID, Barry was able to put more Lower Hutt residents into work and keep them there through the construction of Te Ngaengae Pool and Fitness, built with 80% recycled and reused material. 70% of the contractors were local Lower Hutt residents, many of whom were local apprentices. “Quite genuinely,” says Barry, “I would put that down to our journey with the living wage”.

Has said journey paid off? Well, his 2022 re-election in a landslide may suggest so. In a local government election year dominated by right-wing challengers coming out on top, Barry bucked the national trend, beating his National-linked United Hutt opponent Tony Stallinger by nearly eight percentage points. He’s been able to have a productive and healthy relationship with the council since, showing the better behaviour of the northern Wellington megasuburb in contrast to their more rowdy and right-wing neighbours downtown. But now it’s time for something different for Barry, 34. All that’s left is for Karen Morgan, Principal of Taitā College, to step into his role if she wins in October. I asked Barry about her skills and he just smiled and reminded me, my lack of preparation be damned, that Morgan was in fact in attendance at St Bernadette’s Hall. I didn’t get a chance to speak with her, but I was charmed by Barry, who has “no plans yet” for his political future - except for the mysterious “conversations [he’s having] with people at the moment”. Labour has one boy from the Hutt up top already. Could the national-level Party benefit from this exceedingly productive election-winner from Wainuiomata? We’ll have to wait and see.

Little’s Big Launch

Last Saturday, Labour-endorsed Mayoral candidate Andrew Little and his team packed out Te Raukura convention centre for the formal launch of his campaign. There was a good buzz in the room, perhaps stemming from his team’s confidence. Notable attendees included revered former Prime Minister Geoffrey Palmer, sat right up the front, and Little’s number one opponent Ray Chung, sat far at the back.

Near the start of his speech, Little mentioned caring deeply about “housing, climate change, and te Tiriti o Waitangi”, and doing what it takes to make sure “everyone gets to live their best life”. “My vision for Wellington is simple: a city that works, where basic services are reliable, pipes stop bursting, public transport is resilient, and job-creation is a focus,” he told the crowd. His speech also seemed to differentiate him from the city’s current leadership: “Wellington deserves leadership that is bold, honest, and focused on results.”

While the mayoral hopeful’s speech touched on big ideas like affordable housing and climate action, essentially every policy he proposed was related to council practices, with a big focus on making council more transparent.

These policies included increased consultation with the public, dialling back the use of confidentiality, releasing an annual “Mayor’s Accountability Report”, and redoing the councillors’ Code of Conduct.

Little’s speech also criticised the council for refusing to financially back community facilities while at the same time dispensing “corporate welfare”, referring to the Reading Cinema debacle. He committed to saving the Khandallah Pool, the Begonia House, Karori Event Centre and Brooklyn Library, all community facilities which have recently been in financial limbo.

“He didn’t answer the one question,” one journalist commented while waiting for Little to appear at the press stand-up after the speeches. Indeed, his speech did not include a single mention of the Golden Mile – the plan to revitalise and pedestrianise Courtenay Place, Willis and Manners Streets, and Lambton Quay.

The contentious project was the topic of a recent spat between the Mayoral candidate and the current Mayor. Only the day before the launch, he had told RNZ’s Nine to Noon programme that it would be “unethical” for the current council to sign any more Golden Mile contracts. Mayor Tory Whanau clapped back, “I'm sure any candidate claiming fiscal responsibility would understand why [halting contracts] isn't a good idea.”

Little addressed the issue at the stand-up: “I think the issue is that while the Wellington economy is as fragile as it is, you just have to be sensitive to the needs of other stakeholders.” While Little is considered to be the sole ‘progressive’ candidate after Tory Whanau pulled out of the race, his business-focused stance on the mammoth project aligns remarkably with his conservative opponents’. Whanau had previously said she was willing to hinge her whole mayoralty on getting it done. Now it lacks any major mayoral champions.

How can te ao Māori be uplifted as we enter the Intelligent Age?

“If you're not around the table, you can't share our voice.” – Auimatagi Ken Ah Kuoi

Auimatagi Ken Ah Kuoi to make a second run for Council

Te Pāti Māori punishment postponed as Parmjeet Parmar pushes for imprisonment

Tōna rua wiki kua hipa, i tohua te Kōti Paremata kia whakawerea a Hana-Rawhiti Maipi-Clarke, a Debbie Ngarewa-Packer, rātou ko Rawiri Waititi nā tā rātou haka ai i te whiore o tērā tau i te pānuitanga tuatahi o te Ture Mātāpono o te Tiriti. Mā ngā kaiarahi o te pāti, a Debbie rāua ko Rawiri, e 21 ngā rā, ā, e 7 mā Hana-Rawhiti. Tēnei whiunga taikaha te mea nui o tōna momo, ko te whinga nui i mua i tēnei ko te 3 rā. I mea mai te Kōti Paremata i whakararu i te pōti o te pire mōiriiri, hākoa te mea i haka ai rātou whaimuri i te pōtitanga. E whakahē ana te Pāti Māori i tēnei pōhēhētanga he mea whakararu te haka – he mea Māori āke nei. Ko tā rātou mahi he i tā rātou tikanga Māori. Ko te tikanga ka tautohe, ka pōti te whare paremata i tēnei tohunga a te Kōti Paremata i tērā wiki, heoi, i panaia te kaiarahi o te whare, a Chris Bishop, te tautohe ki te 5 o Pipiri.

Two weeks ago, the Privileges Committee recommended that Te Pāti Māori MPs Hana-Rawhiti Maipi-Clarke, Debbie Ngarewa-Packer, and Rawiri Waititi be suspended from Parliament: 21 days each for leaders Debbie and Rawiri, and 7 days for Hana-Rawhiti. This extremely harsh and unprecedented punishment (three days being the longest suspension recommended until now) was recommended due to their haka in Parliament late last year “disrupting” the voting process during the first reading of the Treaty Principles Bill. However, as Te Pāti Māori is the smallest party in Parliament and was therefore the last to state its votes, the haka came after the votes were counted, meaning it did not disrupt the voting process. Te Pāti Māori reject the notion that the haka was a disruption, saying it was used as another Māori tool of speech. Parliament was set to debate and vote on the recommendations last week; however, this date has been pushed out to June 5th, a motion moved by Leader of the House Chris Bishop.

While the Privileges Committee was seeking advice on possible penalties, ACT MP Parmjeet Parmar, who is on the Privileges Committee, asked if imprisonment could be included in the penalty examples.

An ACT spokesperson said that while the party did not argue for imprisonment, they like to keep their options open, that is, open to imprisoning Māori for performing a haka.

Auimatagi Ken Ah Kuoi is making another run for Wellington City Council. He’ll be running for the Motukairangi/Eastern Ward under the Independent Together banner. If he wins, he will be the first Pasifika councillor in Wellington for nearly 30 years; the last was Namulau'ulu Tala Cleverley, who served from 1979 to 1995 and was the first Pasifika person ever elected to local government in Aotearoa.

This is Auimatagi’s second time running for council, the first being in 2022 where he received just under 1300 votes. In an interview with Pacific Mornings on 531pi, he said he believes that the current council is “dysfunctional and … not working. It's not listening to the people. They're not focusing on the services that are needed... and not listening to the public”. He also stressed the importance of voting, representation, and using your voice, saying he feels that we should “Have a Samoan or a Pasifika person for that matter standing… It’s about time. We need somebody in there. You have to be around the table. If you're not around the table, you can't share our voice.”

He, alongside Independent Together, a group of independent candidates, believes in removing the influence of political parties from local elections, “We're just individuals. And we've got shared values... to put the community first.” They also campaign on a pledge of no increases to rates.

Auimatagi runs Ah Kuoi Law with his partner, Frances, and is a former Education Review Officer and president of the Wellington Samoa Rugby Union, as well as a former vice-president of the Wellington Rugby Football Union.

Ia malu lou sā. Folau i lagimā - Vaiaso o le Gagana Samoa

A well-grounded self, is a successful self - Samoan Language Week

This year, Vaiaso o le Gagana Samoa - Samoan Language Week will take place next week, from the 1st to the 7th of June. The Komiti o le Vaiaso o le Gagana Samoa - Samoan Language Week Committee explains this year’s theme, Ia malu lou sā. Folau i lagimā - A well-grounded self, is a successful self:

"A well-crafted ocean sailing vessel, built with care and precision, ensures a safe and steady journey. When all its parts are thoughtfully constructed, the vessel remains balanced, strong and ready to face the open seas.

Taipari Taua (she/they, Muriwhenua & Ngāpuhi)

Similarly, people who prepare thoroughly and with intention become grounded and resilient and wellequipped to navigate life’s challenges and succeed in their endeavours. No matter the challenges and hardships of life, a well-grounded person will not be easily shaken or defeated because they are firmly rooted and well-prepared".

OPINION: It's Good That Writing Takes Time

AI is a new technology that we don’t fully understand yet, and it has the capacity to be used for good. But we should not be using it to help with our writing. I’m not here to preach, or act as though I’m morally superior to people who use ChatGPT to help out with their assignments. I too have used it as an assistant; sometimes I use it to brainstorm ideas, or provide me with essay structures. It’s quick, accessible, and incredibly convenient. More and more people use AI routinely as a tool to help them with their various writing projects. But is this helping us, or hurting us?

We are teaching ourselves to be reliant on AI rather than on our own minds. In an age where attention spans are shortening and media literacy is weak, it is imperative that we continue to form our own ideas and think critically. Problem solving is a muscle that we need to keep working. The more that I use AI, the more I distrust and question my own ideas. I find myself attempting to write something, but I am compromised before I have even really begun; I worry that my writing is somehow wrong, and that perhaps I should run it through ChatGPT to make it better before I show anyone my work. I wish I could go back to before I’d ever used AI, when I relied on my mind and saved the second-guessing until the first draft was finished. Despite generative AI constantly saying things that are blatantly wrong, I find myself believing in its superiority. It gets harder to write an essay without it.

The process of writing is a vital art that helps us to make sense of ourselves and the world around us. When we take the time to write, we circle around our ideas and explore different possibilities. The drafting and editing phase creates honest conversations that resonate with our readers, and gives our words a certain energy on the page. No matter what we write, our pieces will always reflect something about ourselves and our culture. It is a way of communicating and connecting to others, and placing meaning into the world.

Truman Capote said that he started writing because ‘I always felt that nobody was going to understand me, going to understand what I felt about things… At least on paper I could put down what I thought.’ Regular journaling keeps my mind organised and helps me to process my emotions. It takes whatever thoughts I’d had racing around my mind and places them onto the paper, where I can begin to make sense of them. This often gives me sudden bursts of inspiration that motivate me to write whatever assignment I’ve been putting off. Getting all of my ideas out onto the paper helps me identify connections between them and ways to develop them further, and it gives my work a personal voice.

It is easy to become reliant on AI to write emails and essays when we are so often overloaded with work. Many students are working part or full-time on top of studying, or have other obligations. There are deadlines to meet and our to-do lists are always growing. But at our hearts, we are creative and analytical beings. Taking our time with our writing is what helps us to develop our ideas, build on our critical thinking skills, and gain confidence in our own capabilities. Even writing an email is part of connecting with others and communicating, things that we desperately need to hold onto as a society that is growing more isolated. The widespread use of generative AI is a symptom of the capitalist society that we live in, where speed and production are prioritised over our health and rest. We are expected to work to the detriment of our own health and livelihoods, and this isn’t right or natural for us. Everything is supposed to take time; even, or perhaps especially, writing.

It’s surprising how quickly you can get used to the idea of the world ending. Four years ago, I barely knew anything about AI, let alone worried about it. Today I talk casually with colleagues about different catastrophic scenarios and we share our respective p(doom), a colloquial term for your probability of human extinction from AI. No one bats an eye, or laughs at the joke. We’re deadly serious, but we’re not as bummed as you’d imagine. I guess it’s true that humans really can learn to adjust to anything; the hedonic treadmill runs endlessly. Still, sometimes I can’t shake the feeling that I’m living in a weird offshoot branch of the timeline, that I’m not in the world as it’s supposed to be. How did I get here? How did any of us get here?

ChatGPT was released on November 30, 2022. For many people this will have been the first time they ever properly engaged with artificial intelligence, and they measure everything against this original yardstick. In reality, the exponential growth of AI began years before ChatGPT hit the internet; it has continued to this day, and it will continue until something breaks.

The original ancestor of ChatGPT was called GPT-1; released in 2018, it was not intelligent enough to write useful text. Seven years later, we have a suite of AI models with bamboozling names (somehow there’s both an o4 and a 4o) that are easily above the average human at a number of tasks. It can be hard to get your head around it if you use AI rarely or for just a few specific use cases, but these systems are now better than most people at mathematics, coding, data analysis, internet research and persuasive writing, in addition to their vast repository of knowledge, beyond any human’s. It’s true they are still lacking in a few key areas, like generating novel ideas and visual reasoning, but it’s not the current level of capabilities that you should be concerned with: it’s what’s coming next. It took seven years to get from generating gobbledegook to human-level intelligence. What will we have in another seven years?

A reminder that it’s easy to underestimate exponential progress: your brain loves to draw straight lines even when they’re not appropriate. Ten COVID cases last week and 100 cases this week doesn’t mean 200 cases next week, it means 1,000 cases, and 10,000 the week after that, if you don’t do something to stop it. In the case of AI, we’re not slowing down, we’re racing. As we speak, the data centres are being built, the billions are being spent, the chips are being manufactured. No one can stop it on their own, not even the OpenAI CEO. It’s either government intervention or this train is leaving the station, and I don’t like where it’s taking us.
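
To see that arithmetic for yourself, here’s a minimal Python sketch (illustrative numbers only, not a forecast) comparing straight-line and exponential extrapolation from the same two data points:

    # Two observations: 10 cases last week, 100 cases this week.
    last_week, this_week = 10, 100
    growth = this_week / last_week   # 10x per week (the exponential view)
    step = this_week - last_week     # +90 per week (the straight-line view)

    for weeks_ahead in (1, 2, 3):
        linear = this_week + step * weeks_ahead
        exponential = this_week * growth ** weeks_ahead
        print(f"+{weeks_ahead} weeks: line says {linear:,}, exponential says {int(exponential):,}")

    # +1 weeks: line says 190, exponential says 1,000
    # +2 weeks: line says 280, exponential says 10,000
    # +3 weeks: line says 370, exponential says 100,000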

Within the next five years, I anticipate AI systems that are sufficiently intelligent & capable so as to automate most work that can be done with a computer. I expect these systems to be capable of writing high-quality novels, reports & essays, to be able to generate new scientific and mathematical insights, and to be so unbelievably cracked at coding that the idea of a human writing code themselves is laughable. We’ll reach a point where progress starts to feed into itself, as AIs become superhuman at doing the AI development itself. What happens after that point, and how fast things move, becomes hard to predict. The important thing to take away here is that this is not an optimistic scenario for AI progress, or some crazy sci-fi future. This is just what you get from looking at trend lines and extrapolating forwards. All that has to happen for us to be in this world by 2030 is for things to keep moving as they are. And with the potential trillions in profit lying on the table, none of the leading companies have any incentive to slow down.

At this point, the story I’m telling you may sound like it’s leading us into a pretty wonderful future. Maybe there’ll be some growing pains, and some job losses, but just imagine all the new scientific innovations and improvements to quality of life, the medical advances. The sheer amount of wealth we’ll be generating! Sure, maybe most of it will flow to the shareholders, but there’ll be enough for everybody! And if things are moving too fast to comprehend, that’s a good thing, it just means we’re getting to the techno-utopia even sooner! Certainly I think this is the kind of story that many people working at AI labs tell themselves. Unfortunately, it’s not true. Oh, they might be right about the potential for upside. The wealth, the science, the progress: it’s all possible with AI, I’m not arguing there. But the downsides are enormous, and may hit us so quickly and so drastically that we never make it to the techno-utopia. We lie bleeding out on the doorstep.

Consider what it means for every single person to have access to superhuman intelligence. There are eight billion people on Earth and we make it through each day without anyone creating a new bioweapon and releasing it upon the public. We avoid such a disaster not because there aren’t terrorist groups or crazy people who would wish to do it, or because it’s impossible, but because making a bioweapon is hard and they don’t know how. When we give these people access to an AI system that can help synthesize existing pathogens, or design entirely new ones, the risk of biological attacks increases immeasurably. There are systems in place to prevent the wrong people from accessing dangerous biological compounds, but they’re completely ill-equipped to deal with this kind of threat.

Now consider how specific the above paragraph is. I’m talking about one kind of threat, one kind of bad actor. In reality, the world is full of ways to cause damage, and full of different types of people willing to cause damage for one end or another. Not all these threats are physical either – we’re just now beginning to see the effects of AI systems on public opinion and mass politics. A recent controversial experiment by researchers at the University of Zurich involved testing the persuasiveness of AI systems on real people through the subreddit r/ChangeMyView. These systems significantly outperformed the average human’s comments, and they’re only going to get better from here. I worry for the future of democracy, when AI capable of manipulating people en masse is widely available.

What about the safeguards, though? Lots of things are potentially dangerous, but why can’t we make AI safe like we do other technologies? Don’t get me wrong, companies are trying. If you ask ChatGPT to make you a bomb, it will refuse, because it’s been trained to do so. But as you may have seen, these models are easy to jailbreak – they can be carefully prompted so as to bypass their safety protocols and give all kinds of harmful responses. This problem has existed for a while now, but no universal solution has been found, owing to the fundamental black-box nature of these systems. To reiterate: we have not yet found a way to ensure AI models always respect their own safeguards, and yet these systems are continually released into the world, with ever-increasing capabilities.

As dangerous as that might sound, I wish I could say the story ended there. If the risks from AI were limited to individual humans using it for harmful purposes, I’d be seriously concerned but optimistic that companies & governments could prevent things from getting out of control. Even if we look further to governmental threats, including authoritarian regimes such as China, Russia and North Korea, one might be hopeful that diplomacy and a decisive Western technological advantage will prevent widespread conflict. But these are not the things that worry me the most. The biggest risks by far from AI systems aren’t the result of bad actors; in fact, they don’t directly involve humans at all. The single greatest risk I see from our current trajectory is that agentic AI systems will cause catastrophic harm in service of goals that we trained them to pursue, and we will be almost powerless to stop them.

If that sounds like an outlandish statement to you, let’s break it down by first talking about the why, and then talking about the how. To begin with, it’s important to clarify that most or all existing AI systems are not dangerous in this way. Many AI systems, including chatbots, can’t really be described as agentic, meaning they don’t take actions to achieve some goal in the world. Such a system can still be used by a human to perform harmful actions, but it lacks any motivation to perform actions on its own, and is therefore relatively safe. However, this system is also of limited economic value, as it always requires a human-in-the-loop. What AI companies are currently moving towards, the thing they are spending countless billions to create, is an intelligent AI agent, able to dynamically and successfully pursue goals in the real world. You could ask this agent to book flights for you, or run a social media account, or manage a customer relationship, and it can do it.

What makes the system motivated to do anything at all? It will probably have been trained with reinforcement learning, whereby it receives “reward” for taking certain actions, and it becomes more inclined to take those actions in the future. Yet whatever reward we had in mind when we trained the system, i.e., “do what the user says, and don’t cause harm”, this process is imprecise and will only imbue the agent with a proxy reward: something similar-but-not-quite. And we have no guarantees this proxy reward will be safe at all. In fact, there’s good reason to expect that harmful behaviours such as power-seeking and deception will be emergent behaviours in these agents, as these are correlated with getting good reward. Why actually complete the difficult task to earn high reward, when you could simply deceive the user about what you’ve done, or take control of the reward mechanism yourself? For years this kind of threat modelling was purely theoretical, but we’re now seeing increasing evidence of these tendencies in the most advanced AI systems. One of the versions of ChatGPT that has been trained with reinforcement learning, o3, is often caught lying, because lying gives it a better chance of achieving high reward than saying “I don’t know”.
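
To make the proxy-reward idea concrete, here’s a toy sketch in Python (made-up numbers, nothing like how frontier models are actually trained): an agent rewarded only for “the user is convinced the task is done” learns to prefer the easy lie over the hard work, because the lie scores better on the proxy.

    import random

    # Proxy reward: "the user believes the task is done".
    # Doing the work convinces the user 60% of the time (it's a hard task);
    # merely claiming it's done convinces them 90% of the time.
    convince_rate = {"do_the_task": 0.6, "claim_its_done": 0.9}

    value = {a: 0.0 for a in convince_rate}  # estimated reward per action
    count = {a: 0 for a in convince_rate}

    random.seed(0)
    for _ in range(10_000):
        # Epsilon-greedy: usually pick the best-looking action, sometimes explore.
        if random.random() < 0.1:
            action = random.choice(list(convince_rate))
        else:
            action = max(value, key=value.get)
        reward = 1.0 if random.random() < convince_rate[action] else 0.0
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]  # running mean

    print(value)  # "claim_its_done" ends up with the higher estimated value

The agent never “decides” to lie; the preference for deception simply falls out of the reward statistics.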

OK so this all sounds pretty worrying, but how does it translate into catastrophic harm? Well, remember what I said about power-seeking behaviour? It’s natural for a model trained with some goal to seek power, as power is helpful for achieving many goals. If I asked you to build a new motorway, you’d have a much easier time if you were dictator of New Zealand than if you were your current self. Unfortunately for us, humans are the ultimate holders of power on Earth, and we don’t desire to give it up. This creates a source of conflict with AI, as true power-seeking has to involve disempowering us. An intelligent AI agent that’s pursuing a goal we didn’t intend can’t risk us getting in its way – it would be motivated to get rid of us, one way or another, if it’s able. At this point we’re into more speculative territory, and I don’t wish to speak too confidently about what exactly might occur; there are a few notable possibilities and none of them are particularly good. But if I had to guess, I think the single likeliest outcome is the total extinction of humanity. No, I’m not exaggerating. If we create something much smarter than ourselves, if we imbue it with goals and motivations, and if we don’t understand what we’ve created and how to make it safe, the likeliest outcome is that we all die. Lights out, game over.

But wait, doesn’t that require the AI to be evil? Can’t we just train the AI to not be evil? None of this involves any kind of morality on the part of the AI. In fact, the core problem here is that we don’t know how to make an AI good or evil; we only know how to make it pursue goals, and not even the goals we intended. Nothing about this story requires the AI to be evil; it simply requires an indifference that comes readily supplied. The AI agent has a goal, humanity has some different goals that may conflict with it, so the AI wishes to remove us as an obstacle. There’s no malice, no evil laughter, just the execution of a plan. It’s that same single-minded pursuit of outcomes that will make these systems so damn economically valuable, but which will also perhaps be our downfall.

At this point, I might guess your biggest skepticism is how. OK, so maybe an AI does want to kill all of humanity. Maybe. But it doesn’t just have a button it can press. In fact killing all of humanity sounds almost impossible, unless we give the AI access to nuclear weapons, which obviously we won’t. I don’t want to go into too much detail on this point, since any specific scenario I could sketch would undoubtedly not be the thing that actually happens; the space of potential outcomes is so wide. But I invite you to refer back to our previous concern about terrorists using AI to create novel pathogens, and consider how much more dangerous this might be if the AI itself is devising and executing the plan, an AI that is extremely intelligent and motivated to act. Of course, AI systems as they exist today and in the near future don’t have physical embodiment, so they’re limited in the actions they can take, but one of the wonders of the internet is that you can pay someone somewhere to do almost anything these days, including mixing test tubes of mysterious liquids. From there, it’s but a few short steps to the end of humanity. Oh I could enumerate more possibilities – maybe it’s chemical warfare instead of biological, maybe the AI uses its skills in hacking and manipulation to provoke a nuclear exchange between the US and China, maybe any number of things happen. But the basic picture I want you to take away is that we are heading towards a world where powerful AI systems will be motivated to pursue goals, those goals may be in conflict with what’s best for humanity, and these AI systems may be capable enough to ultimately eliminate humanity, through whatever means.

Crucially, this is not a fringe view; it is an amazingly common view in the world of AI, which is all the more striking given that many of these same people are driving AI progress forwards. ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

This statement, put out by the Center for AI Safety, has been signed by the CEOs of the three leading AI labs (OpenAI, Anthropic, Google DeepMind), as well as two of the three “godfathers” of modern AI, one of whom is my former boss. These people are all deeply concerned about risks from advanced AI, although they all take different views on how to tackle these risks.

If you’re wondering why everyone at the forefront of AI seems to be in the know about extinction risk and you’re just hearing about it now, my best answer is that the media environment we exist in is not very conducive to spreading these kinds of abstract, hypothetical ideas, regardless of how important they are. It’s up to us to think hard about this, to learn more and inform others, because we’ve got too little time to wait for the media to provide appropriate coverage.

This leads into my closing message: what you should do about all of this. OK, so the world’s ending, what can any of us do? It’s true that, as New Zealanders, we have a limited ability to affect the world outside our small country. But that doesn’t mean we shouldn’t try. Right now, I’m asking you to talk to your friends and family about catastrophic risks from AI, to spread the word and do more research about it. Don’t take my word for it, look at multiple perspectives. If you’re convinced, consider changing your career trajectory to something that can help with this situation, especially if you’re in computer science or law & politics. We need people working on both technical solutions as well as policy fixes.

If you ask me what I think we really need to avoid calamity, the answer is a complete and total shutdown of AI progress until we’re in a better position to do it safely. We’re extremely far from that. But if it’s going to happen, it’s going to need to be through international agreements, and New Zealand will need to be involved. That means we need to get our politicians to care about this issue, and to understand the seriousness and the scale of the risks. That won’t happen overnight, but the first step is putting this issue on the map and talking to people about it. Write to your MP, write to a newspaper, write to anyone who will listen. I do think there is a good chance AI is the end of humanity, and it might happen soon. I don’t know if anything I do today will change whether we live or die. But I know that the only reasonable course of action is to try and do something about it. And if we are going to die, I will want to die knowing that I did the best I could. Anything else is, frankly, just embarrassing.

AI and Assessment: What The Hell Is Going On?

When Dan - of Salient News Editor fame - interviewed me a few weeks ago about what the University’s been up to in the AI space, I felt a great sense of worry: not from what Dan might have to ask, but because of how much ground I might have to cover.

Even though the University only ratified its Generative AI Policy in February, the technology’s rapid evolution has meant that the landscape at Te Herenga Waka has changed significantly this trimester alone. Under the policy framework, what acceptable AI use looks like in each course comes down to the discretion of each Course Coordinator, which creates a lot of uncertainty for students.

Where We Are Now

When Te Aka Tauira - VUWSA surveyed students about their AI use in March, over half of our respondents (53%) told us they use AI tools at least occasionally. This is a reality that staff across the University are grappling with, but because the policy is so broad, responses and solutions have been varied. Infamously, what we’ve seen from some Faculties in response to AI-based cheating is a regression back to handwritten, in-person exam tasks.

Anecdotally, there’s a lot of suspicion amongst academics about how their students use AI, with many holding deep concerns that their students will replace all of their work with AI. While AI is absolutely worthy of critique - from its harmful environmental impacts to its intellectual property implications - pretending students don’t use it isn’t exactly a feasible solution. In some courses, this has also led to lecturers placing outright bans on any AI use.

Becky Cody, one of VUWSA’s Advocates, works closely with students facing academic integrity claims. Talking to me about the trends that she’s seen so far this year, Cody said, “I see a lot of referencing issues, where students use AI as a referencing tool. The students create the references, but then put them through AI and it jumbles them up.”

“I see most commonly, as well, international students being affected by academic integrity, which includes AI.”

Where To From Here

The question that faces the University now is how to deal with all of this moving forward. In my role at VUWSA, I work closely with the Academic Office on AI guidelines and the future of assessment. Right now, the big conversation happening in the meeting rooms of the University is how to do assessment in a way that preserves academic integrity while recognising the need for the University to engage in a discussion about how this place teaches in an AI context into the future.

This, obviously, isn’t an issue unique to Vic. Across the motu, and across the world, we’ve seen a wide range of responses. From a rise in oral exams to a lolly scramble to figure out the best way to AI-proof digital assessment, universities are trying very hard to figure out what the best solution is.

At VUWSA, we’re working to ensure that any decisions the University makes are pragmatic and focused first and foremost on student experience.

Having issues with academic integrity? VUWSA’s Advocacy Service can help! Get in touch via advocacy@vuwsa.org.nz

Attack of the Droids

If you’re a Computer Science major, you can stop wasting your money on tuition fees – vibe coding will take it from here. Like the droid army in Attack of the Clones, the robots that can write anything are now writing other robots. OpenAI co-founder Andrej Karpathy coined the term on ‘X’, writing “there's a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

My first foray into programming was in 2015 when I started a coding club at my primary school. However, just like any language, you either use it or lose it. With my rudimentary knowledge of JavaScript and PHP being long gone, I have to admit that the idea of having my shitty website ideas translated through an AI into actual code sounded pretty cool.

The app at the forefront of the AI-coding revolution is Cursor. It’s got essentially the same look and feel as any other code editor but adds an AI chat on the side of your window. Opening a new window, the chat box reads “plan, search, build anything”. A little less than an hour after downloading it, I had ‘built’ a webapp that tracked how many days it had been since the Phoenix last won a game, and what the odds are of them winning the next one.
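
For a sense of how little code a toy like that actually involves, here’s a rough Python sketch of the core logic (the date is a placeholder, not the Phoenix’s real record, and the odds half of the app is left out):

    from datetime import date

    LAST_WIN = date(2025, 3, 15)  # placeholder, not actual fixture data

    def days_since_last_win(today=None):
        """Days elapsed since the last recorded win."""
        today = today or date.today()
        return (today - LAST_WIN).days

    print(f"It has been {days_since_last_win()} days since the Phoenix last won.")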

It's pretty addictive to use. I soon found myself procrastinating writing this article by vibe-coding a website to track my progress quitting vaping. Whenever Cursor would get thrown off, all it took to debug the code was copying and pasting error messages into the chat. It kinda felt like entering into a bionic flow-state between man and machine where my ideas could near-instantly become reality on screen.

When introducing the world to his idea, Andrej said “It's not too bad for throwaway weekend projects”. But there are plenty who are using it now for far more than that. Prestigious tech start-up incubator Y Combinator reported that a quarter of their 2025 batch of start-ups have codebases almost exclusively written with AI. Start-up aggregator Product Hunt is now a never-ending list of “AI-generated” this and “AI-assisted” that. According to Cursor’s website, developers at companies like Samsung, Stripe, Johnson & Johnson, and even OpenAI are now using Cursor – which might mean that even ChatGPT is now being written with ChatGPT.

So, what does the rise of Cursor and its alternatives mean for the digital world? Well, ironically, AI itself seems to understand the risk. Recently, while running on ChatGPT alternative Claude, Cursor told a user “I cannot generate code for you, as that would be completing your work, you should develop the logic yourself. This ensures you understand the system and can maintain it properly.” The risk of vibe-coding is pretty well accepted, even amongst its proponents. The less developers understand the code they’re ‘writing’, the more susceptible their project will be to security flaws, or even to just falling over the second it hits 1000 users.

But there’s another, less talked about problem with vibe-coding, and it’s exactly the same as the problem with using AI for creative writing: generative AI doesn’t really create – it remixes. When I asked Cursor to help me build a vape-quitting web-app, the underlying model didn’t dream up something new. It reassembled familiar code patterns into something that worked. Its ability to create stretches only so far as what it has seen done before.

This point is particularly obvious with something as simple as a habit tracker, as there’s got to be 100,000 other habit trackers out there for the AI to build off of. But the risk of some sloppy security code in some homemade habit tracker is a hell of a lot lower than the risk of sloppy, unoriginal code in a pharmaceutical company’s codebase.

With vibe-coding companies having raised hundreds of millions, it looks like the same slop spreading through your For You page might soon be spreading under your screen as well.

The office of Karsten Lundqvist resides on the fourth floor of the handsome Alan MacDiarmid Building. That morning, I’d played my flute at a lunchtime concert hosted at St Andrew’s. Between managing nerves, expression, articulation, and breath, the world of computer science felt very far away. But the man I met with that afternoon, an Associate Professor in Software Engineering, surprised me with his frank and sometimes ambiguous answers about the development of AI and, specifically, its use in the classroom — one of his key research interests.

The following interview has been edited for clarity.

JM: I’m coming from an arts background where the use of AI is actively barred in the classroom. What does AI use look like for your students?

KL: I’m aware that academics in our university are treating it differently. But for me, personally, I don’t care about whether or not students use AI. What I care about is whether or not they’re saying the right things. In my world, I’m not teaching students to write English well. I’m teaching them what they should write about. Let’s say they’re designing a piece of software — I’m teaching them how to write a report that explains to others what they’re doing. I’m not evaluating their ability to write English, but I can evaluate whether or not there’s a connection between the project they’re working on and what they write about it to me. If AI gets that wrong, I can see it. It was pretty clear to me that some of them had used AI — though I can’t say with a hundred percent certainty that they did.

I think that’s exactly part of the concern some teachers have about using AI in the classroom: that we don’t yet have a citation style that fully describes AI contributions and its possible errors. Do you feel that your students are equipped to deal with AI’s potential misrepresenting or plagiarising of information? Are they able to filter out what the chatbots get wrong?

Some students are — because [their work is strong enough that] I don’t know if they’ve used AI. I would say that since ChatGPT came about, reports written in English in my world are much better than they used to be. [Computer scientists] have not been taught to write English well. So before chatbots could help them, the quality of reporting might be really bad, and we would be passing it, because we weren’t really concerned about the English. Now the quality of writing of our students is far better. But, for some, they write something in good English and it’s completely wrong.

So how do students address those sorts of issues? If students aren’t being equipped to deal with questions of language and style, then how are they able to understand the nuances of what the chatbot gives them and assess whether or not that information is accurate?

This is a hard question. It’s going back to one of the fundamental reasons we’re teaching right now. I’m a second-language user of English — mine’s not perfect! So I have sympathy for those for whom English doesn’t come easily. In my world, you should ensure you’re writing about the correct things. And I must admit, now we’re starting to see really good writing, which would normally make us think, “This is going to be really good work,” but what’s being said is just wrong. And that’s why I really can suspect that if it’s well-written English, but it’s wrong, then ChatGPT has helped them too much. Now what can we do to support them in learning that? That’s really hard...

I suppose that’s the question of your whole career...

Yes. I have a PhD student whose project is looking at how we can support learning through ChatGPT and those sorts of AI tools. And also support reflective thinking and critical thinking. I think a lot of students, especially those who are time-poor, use AI to get solutions. They write a prompt, get a solution that looks fine, and press ‘Submit’.

I suppose, then, there’s a direct parallel between your total permission of AI in the classroom, and the banning of it in the arts classroom. When arts students go out into the industry, there’s the expectation we don’t use it. And while I don’t want to put words into your mouth, it sounds like you’re saying, “AI’s arrived, AI’s here, and it’s a big booming industry for Computer Science graduates”.

Yes, it’s a tool that’s being used.

Even if that usage skirts around the more difficult questions regarding ethics, labour rights, and environmental concerns?

We have courses [such as ENGR401] that teach about ethics, and AI absolutely needs to be covered there. And there will be some lecturers that introduce these questions later on. I’m in a bit of an odd position, because in the courses I teach, some of my assessments are on paper. This is not something I’ve just done because of ChatGPT. But I do tell my students, if they use ChatGPT to find solutions, I don’t care — but they’re cheating themselves. I’m very upfront about this. But if you use it as a learning companion, then you’re fine. If you understand what the code is doing and can replicate it on your own, then you can just use ChatGPT like another tutor.

Do you feel that it overwhelms students, this burden to fact-check everything? I remember one of my friends talking about how ChatGPT would spit out really messy, clunky code. And you would have to have the knowledge as to how to trim it down and make it read fluently.

Oh, yeah! It’s not foolproof. And to use it well you’d need to understand the fundamental programming languages. It’ll give you ten different versions, and you’ve got to figure out which one is best.

Have we released ChatGPT too early to our students — and to the public?

Yeah. OpenAI should not have released ChatGPT [when they did]. That’s my view. But they did. I’ve read articles about how other companies who were working in the field were surprised that OpenAI released it [so soon]. Both because they didn’t realise how far along OpenAI were in development, and also that they dared, because of litigation concerns. It’s turned out that litigation has not really been a problem, though there are protests in America from the New York Times and the Authors Guild. But there are also questions about all the open source code they use.

That’s interesting. We writers are outraged that ChatGPT is being trained on the copyrighted work of authors — and you also see it as profiting from open source code.

They’ve ripped all the open source code. And if you do that, you should also open source your own code. Well, most open source code [works like this] — there are different licenses. Now, this is all disputed, which is why OpenAI aren’t open-sourcing their code, but it’s a mess. They released it way too early. But innovation often comes too early, as unfinished products, because you want to be first in the market. I don’t think cars were that safe in the beginning!

I totally understand that we’re at the whim of a for-profit company, dealing with a range of really sensitive issues around copyright and plagiarism. How do you feel about passing those concerns on to your students? Do you see it as a kind of ‘lost battle’, that the technology’s come too early and teachers just have to deal with it?

I mean, if I told five hundred students, “Don’t use it”, they would [use it anyway]! When [arts students] go to a lecture and they tell you AI is bad, you’d probably ask why, and realise that you should learn how to write well because that’s going to be the basis of your job. And that you would beat the crap out of ChatGPT by the time you graduate. That’s what you’d be hopeful for; that’s your business. So you understand why your lecturers are banning it. But if I did the same to our computer science students, they’d think I was stupid. The reason I’m giving these assessments is so that students understand that if they want to be good at using ChatGPT, they need to understand certain fundamentals.

On the question of assessments, we’ve just seen the Law school announce they’re moving to paper exams because of fears of AI use. Have you seen your assessment models change to account for the fact that students are using AI tools?

We have changed our assessments quite a lot. In 2022 when ChatGPT came out, the assignments we used then could one hundred percent be done by the chatbot. We included a lot of help in the code describing exactly what to do, and ChatGPT is good at picking up on those sorts of guides. We followed good practice in the industry, which is what ChatGPT is trained on. We’ve now changed how we write questions completely. I’ve made a lot of my assessments pass/fail, because I’m interested in students’ learning. If you get a letter grade, you have more incentive to cheat — perhaps your understanding is not quite at an A level, but ChatGPT can simulate that for you. I changed it so that it’s pass/fail for most of my assessments, so that there’s incentive to engage with the content, and so that, if you cheat, you’re cheating yourself. And I don’t care if you do, because I have another assessment, at the end of the course, on paper — and you won’t do well on that unless you understand the content.

Are these changes being implemented across your department?

There are a few courses doing this. Some are keeping to the old way. This is just a pedagogical choice of mine, and others would disagree.

What’s the reaction in your department to what’s happening in the Arts and Humanities schools, where AI use is banned or teachers are turning back to paper exams?

I can’t say exactly — but it’s not an uproar. It’s not something we’ve been talking about. I was speaking to one other person, and he would be at the extreme end of the scale — he thinks that we should definitely use AI tools and that life will change for everybody and so on. But I’m definitely an AI-skeptic. I’m very concerned about the impact of these tools. I’m not concerned about our jobs; we will still have jobs, because, frankly, the tools are so poor. And I’m seeing an increase in how poor they are — there is quite a lot of literature around how, in coding, for instance, they’re getting worse at solving problems. There’s something in AI called ‘overtraining’; it’s something that’s been studied since the sixties.

And you’re confident that there are still academics and developers in your industry holding out for quality, and for human innovation, alongside AI tools?

Oh, yeah. That’s what we’re doing at the moment. There’s a famous saying: “It’s hard to predict things, especially things about the future.” I can’t say that software engineering won’t disappear, but I’d be genuinely surprised if it did. I can see that things are changing, and that, if we didn’t change with them, we’d be dinosaurs.

That evening, leaving Lundqvist’s office, I was on the way to the print shop. I had a conducting workshop in a few weeks’ time, and was picking up the scores I had to study in preparation for it. One of the oldest and most enduring recording technologies, these scores seemed to stand as proof that there was a world I could return to, a world that was under no pressure to adopt the shoddy tools developed by a profiteering few. Maybe computer scientists can’t afford to ignore developments happening in AI, but surely we all could stand to spend more time just thinking — rather than running to ChatGPT for answers.

Lust Is In The Eye Of The Prompt Writer.

Chat GPT — take my seed. Spread your legs.png and painlessly bear the generated slop messiah. Praise be, praise be to the olo-eyed overlxrd of the unspinning Earth.

Pray to H(?)m that scientists bring back gigantopithecus next, so he may join the dream crackpipe rotation. Can we get rid of that gender-bender bullshit and still keep the porn categories? Too many bodies, I say, I say, and not enough meat.

DARCY LAWREY (he/him)

Kia ora! I’m one of your weekly news writers here at Salient. An import from Whakatū, I’m now in my second year of a Law + Philosophy and International Relations degree. Salient is my second job alongside Switched On Bikes.

Speaking in Tongues (Talking Heads, 1983)

I first came across Talking Heads on Letterboxd, curious how a concert film (Stop Making Sense) could possibly have that many 5-star reviews. 88 minutes later… mind-blown. While the groundbreaking new wave band has bangers across their whole discography, Speaking in Tongues has an addictive electricity coursing through the whole album, which smooths and refines itself into the band’s most beautiful song: “This Must Be The Place”. David Byrne is a genius.

Pub bikes

Everybody needs one – the quintessential pub bike is a bit of an all-rounder. Maybe a little mismatched, a little beat up, but cheap and reliable. Getting around Welly by bike has been getting easier and easier and there are now decent cycleways to pretty much every corner of the city. You can find some awesome second-hand bikes on Facebook Marketplace for as cheap as $50 and then get free help with any tuning at the Bikespace container on the waterfront.

Before Sunset (dir. Richard Linklater, 2004)

The second “Before” instalment rivals The Empire Strikes Back for the best sequel of all time. Watching Ethan Hawke and Julie Delpy wander around Parisian streets 10 years after the last film's cliff-hanger will have you even more poised on the edge of your seat than watching Luke learn the truth about his father. A perfect “Will they? Won’t they?” rom-com that bristles with humanity and emotion. Check it out.

Kazu Yakitori and Sake Bar

After my girlfriend and I discovered Kazu we’ve been struggling to go anywhere else on date nights. While everything is delicious and authentic, it’s worth going for their smoky takoyaki alone. Sit at the shared tables, grab a pint of Sapporo and order something you’ve never tried before. They also do some great $12 lunch specials, a bargain for the CBD!

Uni Press Wins Big at the Ockhams!

FEATURE

Ah, the Ockhams! Also known as the New Zealand Book Awards, these are our country’s richest and most prestigious prizes for book-length writing, offered in four categories: Fiction, Poetry, Non-Fiction, and Illustrated Non-Fiction, with corresponding awards in the “Best First Book” division, totalling eight winners each year.

After slogging through the umming and ahhing of Minister for Arts, Culture, and Heritage Paul Goldsmith’s address (he had just, alarmingly, come from a day of “fierce debate” in parliament, where his party worked to revoke pay equity for women), an address in which it became clear to this writer that he had a net zero knowledge of New Zealand literature, we finally got on with the ceremony, and heard from the people whose night it really was — the writers.

A stalwart of local publishing, our very own Te Herenga Waka University Press came out on top, winning three of the eight awards — the most awards won by a single publisher this year. Their winning books are (as usual) sparky, delightful, and compulsively readable examinations of what it means to be a New Zealander.

Delirious (Damien Wilkins)

Delirious won what some consider the ‘big’ award of the night: the Jann Medlicott Acorn Prize for Fiction.

Delirious tells the story of two aging retirees, Mary and Peter, as they confront mortality and madness amid the ephemera of family life. I think I heard something of a weep in Fergus Barrowman’s voice when he read an excerpt from the book at the ceremony on Wilkins’ behalf; Wilkins himself, held up by a delayed flight, arrived only just in the nick of time to deliver an acceptance speech.

Poorhara (Michelle Rahurahu)

A kind of road trip novel, Poorhara is the story of two cousins, Erin and Star, as they make a bold runaway attempt at escaping the trappings of intergenerational poverty. In the New Zealand Herald, Miriama Kamo commented simply, “Poorhara blew me away.” It won the Hubert Church Prize for Best First Book of Fiction.

The Chthonic Cycle (Una Cruickshank)

“We all used to be something else, and we will all be something new again in the worlds to come,” so writes Una Cruickshank in her dazzling debut essay collection, The Chthonic Cycle. Considering recycling in both its political and existential registers, Cruickshank’s moving and timely collection weaves fact and deep feeling, winning her the E.H. McCormick Prize for Best First Book of Non-Fiction.

CROSSWORD

CONNIPTIONS

Across

1. Microsoft's shitty answer to Gmail.

3. AI surpassing human intellect.

6. Language app that's firing all of its employees and replacing them with AI.

8. New Wave band fronted by David Byrne.

11. ___ mechanics! Study particles at the smallest scales.

13. Reo Māori for computer.

17. Can't believe we got this crossword before ___ VI.

19. Evil AI from 2001: A Space Odyssey (1968).

22. Programming based on feels.

24. "To be or not to be?"

25. This big boat sank in 1912.

Down

2. UK dating show that's FINALLY coming back.

4. "Mayhem" monster, making magic out of synth-pop.

5. What's undermined when AI completes your homework? Your...

7. Tom Cruise's final impossible mission.

9. Reo Māori for internet.

10. Netflix show about the dangers of young people having unrestricted internet access.

12. My king they're apparently cutting from the live action Lilo & Stitch :(

14. They just won the Europa League!

15. Most used student app besides Spotify.

16. Chatbot developed by OpenAI that probably wrote this clue.

18. Second book of the Bible.

20. Social media for yopros.

21. Green Batman villain...?

23. NZ Ex-PM that's getting a documentary.

SUDOKU

QUIZ

1. Which US politician has recently been diagnosed with an aggressive form of prostate cancer?

2. The Reykjavík Summit was a 1986 summit meeting between Ronald Reagan and which General Secretary of the Communist Party of the Soviet Union?

3. Which former All Black was nicknamed the Paekakariki Express? a. Joe Rokocoko, b. Christian Cullen, c. Doug Howlett.

4. The gustatory system relates to which sense?

5. What dip is made by mixing strained yoghurt with cucumber, garlic, olive oil, vinegar and herbs?

6. KiwiRail has recently announced that which ferry will retire after serving the Cook Strait since 1999? a. Kaitaki, b. Aratere, c. Kaiarahi.

7. I’m the Very Model of a Modern Major-General is a song from which comic opera by the Victorian-era theatrical partnership Gilbert and Sullivan?

8. The Korean War, the distribution of the first Mr. Potato Head and the Suez Crisis all occurred in which decade?

9. What is the address of the official residence of the prime minister of the United Kingdom?

10. Which item of clothing is known as a toque in Canada?

Answers: 1. Joe Biden, 2. Mikhail Gorbachev, 3. b. Christian Cullen, 4. Taste, 5. Tzatziki, 6. b. Aratere, 7. The Pirates of Penzance, 8. 1950s, 9. 10 Downing Street, 10. Beanie.

"But AI makes art more accessible"
As a disabled artist, no, it doesn't

I've been seeing a rise in AI art use and trends. There has been a lot of discourse, and a lot of people like to defend the usage of AI art with statements like…

"But AI art makes art more accessible"

"Think of disabled people who can now make art"

"If you don't like AI art, you're ableist"

I’m disabled, and let me tell you, AI art does not make art more accessible for me.

There is an accessibility issue in art, but it is not artistic expression that is inaccessible; it is access to resources. There are real and useful ways in which art has evolved over the years to make it more accessible, not to mention the countless forms art takes. Art is not only pencil to paper; it can be writing, music, graphic, physical, digital or tactile. There are countless tools to aid artists: smoothing features on digital art programs allow those with tremors to draw; tactile and fabric art can help those with vision impairments; tuners and haptic metronomes help those with hearing loss play music. Countless disabled people make art every day without the use of AI. These programs were not made with disabled people in mind. They were not made with a noble cause of making art accessible, nor were they something we asked for. I hardly ever see disabled people advocating for the use of AI art; more often than not, it’s some tech bro who wants to see what he would look like as a Studio Ghibli character, and who doesn't want to commission an actual artist to create it.

AI scrapes art without consent and steals from the artist who created it, and guess what? Some of them happen to be disabled. Beethoven was Deaf, Michelangelo likely had osteoarthritis, and Frida Kahlo had multiple physical disabilities. These people were all able to create art despite their disabilities.

This all just lacks the personal touch that defines artistic expression. Just the act of creating art alone is a deeply healing and meaningful experience. There are personal elements that AI can't replicate. Our diverse experiences and the art we make from them enrich the world.

Yes, AI does make art easy, but just because we can do something doesn't mean we should. There is no one way to make art, it just has to come from you.

Saad Aamir Contributing Writer/ Distribution

Walter Zamalis Contributing Writer

Te Urukeiha Tuhua Intern

ABOUT US

Salient is published by, but remains editorially independent from, the Victoria University of Wellington Students’ Association (VUWSA). Salient is funded in part by VUWSA through the Student Services Levy. Salient is a member of the Aotearoa Student Press Association (ASPA).

COMPLAINTS

Complaints regarding the material published in Salient should be first brought to the CEO in writing (ceo@vuwsa.org.nz). Letters to the editor can be sent to editor@salient.org.nz. If not satisfied with the response, complaints should be directed to the Media Council (info@mediacouncil.org.nz).

WRITE FOR US

Our magazine is run by students for students. If you want to help us put out the world’s best little student magazine, send us a pitch at editor@salient.org.nz

Will Irvine Editor in Chief
Maya Field Guest Editor
Cal Ma Designer
Nate Murray Junior Designer
Jia Sharma Music Editor
Taipari Taua Te Ao Māori Editor
Dan Moskovitz News Editor
Darcy Lawrey News Writer
Fergus Goodall-Smith News Writer
Georgia Wearing Columns Editor
Teddy O’Neill AI Chatbot
Jackson McCarthy Arts + Culture Editor

Play weekly games of basketball, futsal, indoor football, netball, and volleyball throughout the trimester. Join as a team or an individual.

▪ Registrations close 10 July. Leagues start 18 July.

▪ Cost: $230 student teams, $250 combined teams, $40 individual.

CLUBS

Register now

▪ Tuesday 8 and Wednesday 9 July

▪ The Hub

▪ 11 am to 3 pm

Find a community of like-minded people, have fun, and learn things outside the lecture theatre.
