Ethical & impactful usage of AI in August 2025

“AI” is wildly hard to define; we’re primarily talking about Generative AI. In two months it’ll possibly be 100% different as it’s moving so fast.

Harvey (chairdog) wonders why it takes Beck & Simon so long to get more treats.

Consciously navigating AI is extremely important as it creates significant risk.

AI is a powerful tool to boost creativity, efficiency, and problem-solving, but it has also been shown to create significant security and privacy risks and to reduce the quality of work for organisations big and small. This article shares our point of view, advocating for transparent, client-opt-in usage underpinned by human oversight and purpose-driven frameworks - ensuring AI’s power is harnessed responsibly for innovation that benefits people, business, and the planet.

Ethical & impactful usage of AI in 2025

Our team has had mixed feelings about AI over the last few years — from unbridled enthusiasm and awe to deep dread and “What is my value now AI can do my job?”. Much of that has faded, dulled, to an extent, through extensive trials, tribulations and testing. Serious security, privacy, environmental, and intellectual property risks exist, and this article aims to dispel some myths and set a framework for when and how we use AI.

Key takeaways

  • AI can enhance creativity, efficiency, and problem-solving when used thoughtfully.
  • "AI" is wildly hard to define, we’re primarily talking about Generative AI. In 2 month's it'll possibly be 100% different as it's moving so fast
  • Transparent, client-opt-in adoption builds trust and safeguards values.
  • Human oversight is essential to maintain quality, ethics, and accountability.
  • Alongside environmental and data privacy risks, we also pay close attention to fairness and bias.
  • AIs are impressive, but without careful interrogation, they can quickly trick you down a path that wastes time.
  • When you become a willing participant, you have choices about what you input and what you do with the output.
  • To help us and our clients navigate the space we’ve summarised the modes of use in our AI Usage Policy.
  • Balancing innovation with environmental and social considerations ensures long-term benefits for people, business, and the planet.

"AI" is wildly hard to define, we’re primarily talking about Generative AI

AI is one of the loosest terms going around: how it's utilised is all over the place, and its definition spans 16 pages of terminology in the ISO / IEC Standard.

In many instances, companies referring to AI are just talking about predefined logic (i.e. if this then that, but much more complex), whereas AI ultimately learns by itself, either once-off (from data fed into a model) or iteratively through feedback. Within AI there is machine learning (a wide array of technical techniques), deep learning (a subset of machine learning aiming to mimic the human brain's learning process) and Generative AI (where deep/machine learning is applied to create outputs).

Source: ICAEW

Basic terminology you may hear about

  • Hierarchy: AI, Machine Learning, Deep Learning & Generative AI
  • Learning methods: Supervised learning; Unsupervised learning
  • Function & Process: Forecasting & regression; Classification; Clustering; Natural Language Processing; Image recognition & computer vision; Recommendation systems
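
To ground a couple of those terms, here's a minimal sketch (assuming Python with scikit-learn installed; the customer numbers are invented purely for illustration) contrasting supervised learning, where we provide labels, with unsupervised learning, where the model finds groupings on its own:

```python
# A toy contrast between supervised and unsupervised learning.
# Requires scikit-learn; the "customer" numbers below are invented for illustration.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Each row: [monthly spend ($), site visits per month]
customers = [[20, 2], [25, 3], [30, 4], [200, 25], [220, 30], [250, 28]]

# Supervised learning (classification): we supply labels (0 = casual, 1 = loyal)
# and the model learns to predict them for new customers.
labels = [0, 0, 0, 1, 1, 1]
classifier = LogisticRegression().fit(customers, labels)
print(classifier.predict([[40, 5], [210, 27]]))  # e.g. [0 1]

# Unsupervised learning (clustering): no labels at all; the model groups
# similar customers by itself.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(clusterer.labels_)  # two discovered groups, e.g. [0 0 0 1 1 1]
```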

Our friends Portable do an excellent job in their comprehensive "Policy for teams using AI" report, or if you want to dive deeper check out the ISO Standards Terminology / Definitions or the OECD’s Framework for Classification of AI.

Our quest for meaningful positive impact

We exist purely to “Unlock the potential of the new economy to leave behind a better world”, which leads to daily questions around how we can be more productive, more valuable, more impactful, and more efficient. A better world means doing things with integrity: in our approach, the work we deliver, how we position our brands, and the tools we use.

Do the tools smash the planet with carbon [Hi, Amazon]? Are they sucking up data and manipulating mass populations [Looking at you Meta/Google]? Are they generating profits that are fuelling war [C’mon Spotify]? Let alone the basic fact that AIs learnt from (read: stole) the intellectual/creative property of most people on the planet, and generate outputs that would normally be considered IP/copyright theft.

People and the planet are physically impacted by AI

There is a litany of evidence that AI — from hardware to data centres, and from model training to usage — can cause tangible physical harm to people and planet. Studies estimate that even small actions, like drafting an email via AI, can consume several litres of water, with total AI water usage predicted to reach trillions of litres per year in the near future. CO₂ emissions are projected by some researchers to increase measurably in the US purely from AI growth. One data centre can use as much water as 100,000 homes.

But we’re not idealists. Most of these tools are the best, by an extreme margin, at delivering outcomes. So a deep internal conflict arises, and there’s no clear line between productivity and ethics. Especially when our work can have a direct positive impact on systems [RIAA], the planet [ATEC] and individuals [CPSN].

A trade-off decision

For example, we conduct detailed market research through in-depth 60-80 question surveys. Through detailed analysis we can identify how our clients can improve, what they should focus on, and which groups of people are most interested or need convincing. It’s very informative and valuable. The most effective way to collect diverse responses, by 10-20X, is Facebook, specifically their in-feed ads. Old school, but it works. The data quality is high, the people are real, and we’ve tried so many other ways to get these responses; they’re just unviable. It’s a real-world trade-off we wrestle with daily.

It comes down to a case-by-case, subjective but educated decision, and sometimes we’ll get it wrong.

AI Bias & Fairness

Alongside environmental and data privacy risks, we also pay close attention to fairness and bias. AI systems trained on skewed datasets can unintentionally reinforce stereotypes or under-serve marginalised groups. This is especially important in areas like recruitment, content guidance, or client-facing outputs. That’s why we maintain human oversight, diversity of input, and context awareness as essential safeguards.

Trials and tribulations

Now, when it comes to AI and the question of productivity, the answer is extremely unclear: it varies day by day, task by task, or maybe with the position of Saturn relative to how well our tomatoes are growing.

We’ve carefully incorporated AI into various services we provide, tried it extensively as individuals for personal support, and listened to the honest voices in the space - we’re trying to cut through the hype.

We’ve used it for everything from summarising a meeting, to turning a bunch of notes and thoughts into a coherent email, drafting messaging or brand strategy, reviewing code, analysing data, designing logos, making social tiles, generating websites, and educating us on a vast array of topics.

Its first or second attempt at a well-crafted prompt can be mind-blowing, but then cracks quickly appear. You click the first five specific references and four of the links don’t exist, or the content of the article doesn’t cover the topic at all, or worse, specifically contradicts the AI’s output.
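
One habit that helps: before trusting AI-cited sources, check that the links actually resolve. A rough sketch, assuming Python with the requests library; the URLs below are placeholders, not real citations:

```python
# Quick-and-dirty check that URLs an AI cited actually resolve.
# Requires the requests library; the URLs below are placeholders.
import requests

cited_urls = [
    "https://example.com/some-report",
    "https://example.org/an-article-the-ai-mentioned",
]

for url in cited_urls:
    try:
        # Some servers reject HEAD, so fall back to a lightweight GET.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code >= 400:
            response = requests.get(url, stream=True, timeout=10)
        status = response.status_code
    except requests.RequestException as exc:
        status = f"failed ({exc.__class__.__name__})"
    print(f"{url} -> {status}")
```

Of course, a link that resolves still has to be read; this only filters out the references that don’t exist at all.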

The illusion of productivity

AIs are impressive, but without careful interrogation, they can quickly trick you down a path that wastes time. I’ve personally been playing with it in a little app, to learn a new programming language and tools, and to see how well AIs work. I’ve used the premium models, paid plans, local models, given lots of context & prompts, used best-practice prompt templates and the like, and basically no matter what you do, this can be the feeling of using AI.

It quickly feels like you’re almost there, but then somehow the progress slips through your hands.

I’ve personally lost hours and hours of time going in logic loops, creating new bad code, having to start again, unpick things. It’s real. (Note: I’ve found the right approach that is working well, but it’s not ā€˜press a button and magic happens’. This article isn’t about how to use AI well.)

@forrestbrazeal

Here’s an example of a basic AI search in our CRM: many of the companies listed don’t exist in our account, we’ve never heard of them, and one is our amazing accountant (not a client!).

A crude ranking of risk and value for some tools we’ve trialed

We’re talking theoretical risk, and actual value based on our experience.

  • Low risk / high value: Claude Code
  • Medium risk / high value: (none listed)
  • High risk / high value: (none listed)
  • Low risk / medium value: Asana, Mural
  • Medium risk / medium value: Slack, Windsurf / Cursor, Github
  • High risk / medium value: Google Workspace, Sentry
  • Low risk / low value: Canva, Adobe, Figma
  • Medium risk / low value: Microsoft Clarity
  • High risk / low value: Zoom, Hubspot

It can be thorny too.

Beyond productivity it’s not all roses - we (humans, including the AI engineers) don’t know how these models actually work. Massive AI-related leaks at Samsung, Microsoft, Zillow and McDonald’s are estimated at over $1B in losses, with hundreds of millions of individuals’ personal information shared. [TechCrunch]

"It details over $1 billion in documented losses from AI-related breaches, including Samsung's semiconductor code leaks, Microsoft's 38TB data exposure, and Zillow's $500M algorithmic failure. The report covers legal precedents establishing company liability for AI outputs, vulnerability rates in AI-generated code, and the McDonald's recruitment chatbot breach affecting 64 million applicants." [Wiz Blog, insideAI News, SecurityWeek]

Without labouring the point, what you put into AI can be accessed by others, in theory. This also applies to lots of technology, which we’ve seen with major breaches in federal government, major insurers, telcos and airlines.

The risk isn’t new; it’s just that billions of individuals are now willingly sharing their own, or their company’s, deep dark secrets.

Beyond sensitive information, inaccurate responses left unchecked can directly hurt a business. “Security researchers consistently find that 30-50% of AI-generated code contains exploitable vulnerabilities.” [arXiv]
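
To make that concrete, here’s the kind of flaw those studies are talking about, sketched in Python with sqlite3 (the users table is hypothetical): SQL built by string formatting is injectable, while a parameterised query treats the same input as plain data.

```python
# A common class of vulnerability in generated code: SQL built via string
# formatting. The "users" table here is hypothetical.
import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE users (name TEXT, email TEXT)")
connection.execute("INSERT INTO users VALUES ('Alice', 'alice@example.com')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Risky pattern (often produced by AI assistants): the input is spliced
# straight into the query, so the OR clause returns every row.
risky = connection.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Safer pattern: a parameterised query treats the input purely as data.
safe = connection.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(risky), len(safe))  # 1 vs 0 rows
```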

In less direct ways, it could misinform you about your data (I’ve seen it completely misinterpret and summarise data or market research), so you make decisions that aren’t well informed and it sends you down the garden path. (Not sure why garden metaphors came out in this section. I’m behind on weeding our garden, maybe that’s why?)
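
The antidote is the human oversight we keep returning to: spot-check the AI’s claimed numbers against the raw data before acting on them. A minimal sketch, assuming Python with pandas and a hypothetical survey.csv export:

```python
# Spot-check an AI-generated summary against the raw data before acting on it.
# Requires pandas; "survey.csv" and its columns are hypothetical.
import pandas as pd

responses = pd.read_csv("survey.csv")

# Suppose the AI claimed "72% of respondents would recommend us".
ai_claimed_share = 0.72
actual_share = (responses["would_recommend"] == "yes").mean()

print(f"AI claimed: {ai_claimed_share:.0%}, raw data says: {actual_share:.0%}")
if abs(actual_share - ai_claimed_share) > 0.05:
    print("Claim is off by more than 5 points - re-check before reporting it.")
```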

You are unwillingly participating.

The AI avalanche is overwhelming. It’s everywhere, and every tech company is fighting to claim the most active users so they can skyrocket their share price, get more funding and realise the $4.8 trillion opportunity.

It’s in Facebook Messenger. All over your phone. Listening to and watching your meetings. Reading all your emails. Accessing all your documents. I’m not talking in theory; it has literally penetrated almost everything.

I’d say in over 50% of the meetings I host on Google Meet or Zoom, the first participant is some friendly AI app (Otter.ai or Zoom itself) that politely asks to join to summarise the meeting. Until now I thought “One of our clients must want to use that app to help them, so I’ll accept it”, but we realised as a team that’s hugely unethical, AND when we ask, nobody actually wants it there.

What’s more hilarious is that when I have tried them, the notes were accurate and amazing, but useless: nobody used them. That’s because everyone was taking their own notes or just using their brains, and taking responsibility to be present and accountable. So in practical terms it didn’t help with productivity at all.

They are watching and recording your faces, conversations, screenshared documents, and chat messages with links to important information. And we’ve all unwillingly let it happen.

A more perverse version is happening in business software, where AI features are automatically turned on and allowed to do as they please.

So we disabled them all by default

So we went through all of our apps and disabled them, most notably Google Workspace (Docs, Gmail, and more), Slack, Hubspot & Zoom, which house our most sensitive information; in all four instances the benefit was zero or extremely low.

Meetings: Zoom



Documents & Email: Gemini in Google Workspace

Code logging: Sentry

CRM: Hubspot



Source control: Github

Operating System: Apple Intelligence, Windows & Android

Interestingly, we have Alexa & Google on our speakers at home and I know they’re listening, but for some reason we’ve left them on. I’m rethinking that right now.

Analytics Platform: Microsoft Clarity


Messaging App: Slack

Today, literally right after writing this first draft, Slack (one of our favourite apps, but now owned by Salesforce, so less favourite) turned on AI by default, with lots of lovely disclaimers. At least this was quite clear, whereas Google & Zoom’s settings were very difficult to find and unpick.

So we clicked “Manage AI permissions”

And disabled it by default. We’ll come back to review the options if it hits the right mark.

They have amazing (sarcasm) policies that clearly state they won’t, will, or maybe will use your data. But beyond those ambiguities, they don’t know how the AI is using the data.

Become a willing participant

When you become a willing participant, you have choices about what you input and what you do with the output. For us, that includes our clients - as we act with their information, code, and marketing, we give them the option to direct how we participate. You can see exactly how this works in practice in our AI Usage Policy.

To help us and our clients navigate the space we’ve summarised the modes of use.

Productivity estimates are illustrative and, as outlined above, it depends. They indicate what happens when things go well, we do the right thing and the AI is having a good day. Arguably the gain is infinite when something the user couldn’t fathom doing themselves can now be done in minutes.

No AI
  • Approach: Disable AI in all software being used for the project, and ignore any AI that aims to inform work.
  • Productivity: Normal productivity.
  • Risk: No risk.
  • Example: Disable AI in your apps & tools.

Researcher
  • Approach: Share very discrete, non-sensitive^ snippets or questions, and use the response as an input. Like a better search engine.
  • Data: Data with third party.
  • Productivity: Small increase in productivity & capability.
  • Risk: Very low risk. Human error: sharing incorrect information or not verifying outputs.
  • Example tools: Claude Teams, ChatGPT Teams, Perplexity.

Generator
  • Approach: Share more non-sensitive information, broader requests that generate actual outputs we can utilise / modify in our work.
  • Data: Data with third party.
  • Productivity: Moderate increase in productivity & capability.
  • Risk: Low risk. Human error: sharing incorrect information or not verifying outputs.
  • Example tools: Claude Teams, ChatGPT Teams.

Assistant (Third-party)
  • Approach: Share most information (not security credentials) and enable it to make edits to files, accepting / approving all outputs. Premium services that have a strong privacy & security position.
  • Data: Data with third party.
  • Productivity: Very high increase in productivity & capability.
  • Risk: Moderate risk. We can't assure how data is stored / used beyond their policy.
  • Example tools: Claude Code, Gemini CLI, Cursor/Windsurf, Claude, ChatGPT.

Assistant (Private)
  • Approach: Share most information (not security credentials) and enable it to make edits to files, accepting / approving all outputs. Privately hosted, ensuring no data leaves our environment.
  • Data: Data kept local or private cloud.
  • Productivity: Moderate to high increase in productivity & capability.
  • Risk: Low risk. No data leaves our private environments (local or hosted).
  • Example tools: Qwen 2.5 Coder 7B, DeepSeek Coder V2, Codestral 25.01, Code Llama.

^Sensitive information: in itself this is ambiguous and broad-ranging, from extremely sensitive system security keys and passwords to your semi-sensitive brand strategy. How much harm could come to your business if it was usable by the public or competitors?

*Personally identifiable information: technically it’s anything (for example 2-3 pieces of information like DOB, address and name) that can be used to impersonate someone or conduct fraud, but we think of it as also including health and financial information. Learn more here.
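
If you do share text with a third-party AI, it’s worth stripping the obvious identifiers first. A crude, illustrative sketch in Python; the regex patterns are simplistic and nowhere near exhaustive, so treat this as a starting point rather than real PII protection:

```python
# Crude redaction of obvious identifiers before text is sent to a third-party AI.
# The patterns are deliberately simple and NOT exhaustive.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU-style numbers
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),            # e.g. a DOB
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Call Jo on 0412 345 678 or jo@example.com, DOB 3/7/1985."
print(redact(note))
# -> "Call Jo on [PHONE] or [EMAIL], DOB [DATE]."
```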

When we use various modes of AI.

Examples of work where we do or don’t use AI, across Strategy, Content, Design, Data and Programming

Be cautious (high value / low risk):
  • Workshop notes, summaries
  • Desktop research
  • Ideation & synthesis
  • Editing tone of voice
  • Improving tone of voice
  • Wireframing / exploration
  • User experience / trends
  • Generative fill / editing images
  • Analytics models
  • Survey question design best practice
  • Benchmarking industry trends
  • Python (coding assistant)
  • Coding assistant
  • Discrete / simple tasks
  • Debugging
  • Code reviews / test creation
  • Exploring new languages / infrastructure / integrations

Don't go there (low value / high risk):
  • Internal intelligence & insight
  • Recording / transcribing meetings
  • Create brand new content
  • Generating art/visuals
  • Content performance data
  • Concepting
  • Brand design
  • Website design
  • Access to data
  • Direct analysis
  • Access to core systems
  • Open briefs with end-to-end code creation
  • Ability to push to repo / production
  • Credentials & security

The level of utilisation of AI in types of work

As a general guide, it depends on the data / inputs we’re using and how we’re using the outputs.

Full conservative, cautious mode is “No AI”, whereas the riskier end is “Assistant (Third-party)”.

Below is our default position and each client can choose to dial up or down.

  • Strategy: ✓ No AI, ✓ Researcher, ✓ Generator, ✓ Assistant (Third-party), ✓ Assistant (Private)
  • Content: ✓ No AI, ✓ Researcher, ✓ Generator, ✓ Assistant (Third-party), ✓ Assistant (Private)
  • Design: ✓ No AI, ✓ Researcher, ✗ Generator, ✗ Assistant (Third-party), ✗ Assistant (Private)
  • Data: ✓ No AI, ✓ Researcher, ⚠️ Generator, ✓ Assistant (Third-party), ✓ Assistant (Private)
  • Programming: ✓ No AI, ✓ Researcher, ⚠️ Generator, ✓ Assistant (Third-party), ✓ Assistant (Private)

So it’s a crazy era we’re in. There are real risks of damage to your organisation and your clients / customers. The impact & ethics of the AI supply chain are very questionable. And we’re all being auto-opted in.

Ultimately, we’ll keep using these tools where they add real value - but always with caution, client opt-in, and a human-in-the-loop mindset. For the exact modes, boundaries, and safeguards we follow, see our AI Usage Policy.

  • Maintain B Corp score from 134.1 with workers included
🟢
  • We officially re-certified in November 2023, and are pleased to report we achieved the same score (to the decimal point). Wild! We shared our experience of recertification here.
  • Share templates, documents, insight into business for good
🟠
  • We haven't done this publicly, but when people have asked, we have shared. And we're sharing a series of things as part of this impact report.
  • Maintain current ownership and governance
🟢
  • Harvey is 100% owned by the Smallchua Family Trust. Rebecca Smallchua is our sole Director.
  • Re-use, recycle and manage dangerous waste
🟢
  • We continue to implement our hazardous waste policy and are on a continuous learning and improvement journey.
  • We repair damaged hardware and minimise purchasing of new equipment.
  • Personally we're all Facebook Marketplace fans.
  • Be climate positive at work and at home
🟠
  • We don't track our CO2 emissions, rather we take a much more general and high emissions view. However, this year, we didn't donate to the environment (see above) so we can't say we countered our CO2.
  • Advocate for climate change / inspire sustainable living
🟢
  • We hosted a panel event on Zero Emissions Day in September 2023, along with our friends at Portable, where we interviewed industry experts on the opportunity to engage with community and work towards a more sustainable future. Recording here.
  • Donate 5% to the environment
🔴
  • We didn't make the donation this year as we're revisiting our impact giving model - more details here
  • Invest $20k in impact businesses plus $20k of pro bono time
🔴
  • We delivered some pro bono time but dropped the ball and had no official measurements in place.
  • We also did not invest $20k in impact businesses, and are reviewing this goal going forward. In the last 12 months, our three Impact Investments all lost their value (Whole Kids, Pronto Bottle & Kester Black). While it's not great, we accept this is part of ambitious investing, and each had their own challenges that they couldn't quite overcome.
  • Buy with intention from local and discriminated groups
🟢
  • We continue to be intentional about our suppliers as outlined in our policy and report the details in the Community chapter of our report.
  • Protest and boycott important issues (Australia Day, Melbourne Cup)
🟢
  • Yes and yes!
  • 9 day fortnights, with option for 4 day weeks
🟢
  • 80% work 9 day fortnights, 40% part-time hours, 10% standard working hours.
  • Improve and increase capability across team
🟢
  • Raising our emotional health levels through a leadership development program with Global Leadership Foundation.
  • Expanding output skills: Market research, Web design, content & copywriting, strategy & development and automation strategy.
  • Targeted and clear personal growth, if we are better our clients will be
🟢
  • A new process for 360 feedback, plus personal goal setting questionnaires that ask the big questions of where we want to go and how we'll get there. Also lots of accountability check-ins.

Client survey metrics

  • 3 / 5 value for money (1 - 'could charge less' and 5 - 'could charge more')
  • 8 / 10 likely to recommend
🟢
  • 3.4 / 5 value for money
  • 9.2 / 10 likely to recommend

No destructive clients. Revenue breakdown: 17% Good, 59% Great, 24% Amazing

🟠
  • No destructive clients.
  • Revenue breakdown: 17% Good, 59% Great, 24% Amazing (A little over on Good and under on Great, but on target for Amazing - which is most important, so we're happy with that)
  • All staff spend 80%+ of their time on clients
🔴
  • Spent 64% of our time on clients (under). Due to team changes (recruitment, onboarding and offboarding) and extra investment in training, personal development and community engagement (e.g. B Local), we did not hit this target. On reflection, we think 80% is too ambitious and we'll be revising it to 70% going forward.
  • Regular, honest check-ins about how we feel
🟢
  • Stand ups, development sessions, watercooler chats, impact updates and more.

$994k revenue (Up $211k on FY2223)

🔴
  • $833,588. Revenue was up 6% YoY. Midway through the year, we adjusted down our target to $879k as team growth / services shifted. The main reasons we didn't hit target were scope creep and overruns, both of which we're trying to manage better with process improvements.
  • Maintain B Corp score from 134.1 with workers included
🟢
  • We applied for our B Corp re-certification at the end of this financial year and are pleased to report we achieved the same score (to the decimal point). Wild!
  • Share templates, documents, insight into business for good
🟠
  • We haven't actively done this publicly, but when people have asked, we have shared. And we're sharing a series of things as part of this impact report.
  • Maintain current ownership and governance
🟢
  • Harvey is 100% owned by the Smallchua Family Trust and Rebecca Smallchua is our sole Director.
  • Re-use, recycle and manage dangerous waste
🟢
  • We continue to implement our hazardous waste policy and are on a continuous learning and improvement journey.
  • We repair damaged hardware and minimise purchasing of new equipment.
  • Personally we're all Facebook Marketplace fans.
  • Donate 5% to the environment
🔴
  • We fell short here, we didn't make the donation. More details here.
  • Advocate for climate change / inspire sustainable living
🟢
  • Be climate positive at work and at home
🟠
  • We don't track our CO2 emissions, rather we take a much more general and high emissions view. However, this year, we didn't donate to the environment (see above) so we can't say we countered our CO2.
  • Protest and boycott important issues (Australia Day, Melbourne Cup)
🟢
  • Have a RAP, engaged stakeholders and implemented more change
🔴
  • Due to competing priorities and limited time (no lack in desire) we de-prioritised our Reconciliation Action Plan as we want to do it meaningfully and have the capacity to follow through. However, we took a few first steps outlined here.
  • Buy with intention from local and discriminated groups
🟢
  • We continue to be intentional about our suppliers as outlined in our policy and report the details in the Community chapter of our report. We took it one step further this year with a public call to pledge to audit suppliers in this campaign www.supplier-impact.com
  • Invest $20k in impact businesses plus $20k of 100% pro bono time
🟠
  • We delivered some pro bono time but dropped the ball and had no official measurements in place. We also did not invest $20k in impact businesses because of the reduced revenue with Becky on maternity leave.
  • Sarah personally donated her photography equipment valued at around $7,500 to empower a content and brand producer in the Solomon Islands.
  • 9 day fortnights, with option for 4 day weeks
🟠
  • 40% work 9 day fortnights, 40% part-time hours, 20% standard working hours.
  • Improve and increase capability across team
🟢
  • Elevated our tool nerd level. See here.
  • Expanding output skills: Market research, Web design, strategy & development, video editing, and automation strategy.
  • Targeted and clear personal growth, if we are better our clients will be
🟢
  • Lots of on-the-tools growth, structured learning through weekly Lunch 'n Learns and Intro to Programming at RMIT.

No destructive clients. Revenue breakdown: 15% Good, 60% Great, 25% Amazing (Here's what the classifications mean)

🟢
  • No destructive clients.
  • Revenue breakdown: 10% Good, 66% Great, 25% Amazing
  • All staff spend 70%+ of their time on clients
🟢
  • Spent 71% of our time on clients (over by only 76 hours).

Client survey metrics

  • 3 / 5 value for money
  • 8 / 10 likely to recommend
🟢
  • 3.4 / 5 value for money
  • 8.8 / 10 likely to recommend

Maintain current revenue

🟠
  • Revenue down 16% YoY
  • Regular, honest check-ins about how we feel
🟢
  • Stand ups, development sessions, watercooler chats, impact updates and more.
22 Bricks
ABCH
ATEC
Abundant Water
Anantaya Jewellery
B Lab ANZ
BZE
Bank Australia
CPSN
Certification O
Chaulk
Client Fabric
Clockwork Films
Common Ground
Compass Studio
Cyclion
Dog & Bone
Envirotecture
Evee
Farm My School
Fellten
Gewürzhaus
Global Leadership Foundation
Goodtel
Green Collar
Hagens Organics
Hey Doodle
Jasper Coffee
Jaunt
KOSI
KingPump
LVLY
Lee Christison
Lumen
MIIROKO
MK Local Foods
Marnie Hawson
Merry People
Nexa Advisory
No Lights No Lycra
North West Guadalcanal Association (NWGA)
OBG
One Small Step
Parliament of Victoria
Peninsula Hot Springs
Pixii
Portable
Possible
Prisma Legal
ReCo
Shadowboxer
Strongim Bisnis
Studio Schools Australia
THL Tourism Holdings Limited
Thankyou
The Next Economy
The Salvage Yard
The Sociable Weaver
Time
WIRE
Whole Kids
iDE