
AI & the R&D Tax Incentive

A Practical Guide

How AI Development Engages with Australia’s R&D Tax Incentive

There aren’t many fields of business still untouched by artificial intelligence (AI) today. From agriculture to mining, healthcare to defence, and marketing to financial services, AI is transforming established practices by automating processes, generating content and improving decision-making. While AI as a research field has existed for decades, the public release of advanced foundation models such as ChatGPT in November 2022, along with other generative and agentic AI systems, has accelerated both the pace of development and its adoption across sectors. As these tools move from theory and research into real-world applications, Australian businesses are racing to build, integrate and adapt them—seeking competitive advantage in what has become a modern-day gold rush of technological innovation.

Contents

  1. Practical Guide
  2. Case Study 1: AI-Powered SaaS Tool for Hospital Operations
  3. Case Study 2: Development of an ML Model for Concrete Defect Detection in Motorsport Racetracks
  4. Why Specialist Expertise Matters When Claiming R&D

Purpose of This Guide

This guide is for Australian companies or foreign corporations working on AI projects in Australia who want to:

  • understand how the R&DTI applies specifically to AI development
  • correctly identify and document eligible core and supporting activities
  • avoid common errors in claim preparation, and
  • maximise the benefit of the incentive while remaining compliant with legislative requirements

Throughout this guide, we reference examples, case studies, and official guidance materials to provide context. We also draw on real-world scenarios to illustrate how AI companies can navigate the line between eligibility and ineligibility, especially when using off-the-shelf models, APIs, or datasets.

You’ll learn how these activities interact with the R&DTI framework, what qualifies, what does not, and how to frame your work to reflect the genuine R&D it may contain.

Readers should not rely on this guide as formal advice. It is intended for general information only and is not a substitute for financial, legal or tax advisory services. Advice you rely on to make important commercial, legal and tax decisions should always be tailored to your specific circumstances. Companies should review the relevant legislation, including Division 355 of the Income Tax Assessment Act 1997, and seek professional guidance to ensure their R&D claims are compliant.

Why the R&D Tax Incentive Matters

To support this kind of innovation, the Research and Development Tax Incentive (R&DTI) offers a tax offset to eligible Australian companies that conduct experimental R&D activities. The incentive is designed to reduce the cost of developing new or improved products, processes, devices, or services, including those involving AI. It does this by offering a tax offset and, for some companies in a tax-loss position, a cash refund.

Companies can access between 33.5% and 48.5% of their eligible R&D expenses back via this program. The exact rate depends on the company’s aggregated turnover and, for larger companies, its R&D intensity.

The R&DTI encourages companies to invest in activities that involve technical uncertainty, require systematic experimentation, and have the purpose of generating new knowledge—exactly the type of work undertaken in much of today’s AI development.

So, is AI development eligible?

That’s the key question, and the short answer is: it depends.

The R&DTI framework is built on core principles that every claimant must understand:

  • core and supporting R&D activities
  • unknown outcomes and technical uncertainty
  • systematic progression of work
  • hypotheses, experimentation, evaluation and conclusions

Many AI projects can meet these criteria, particularly those involving novel model development, advanced tuning, integration in unpredictable environments, or unexplored use cases. However, others that focus purely on deploying existing models or routine automation may not.

We use the term “AI” broadly to cover a range of subfields and methods, including:

  • machine learning (ML)
  • deep learning
  • natural language processing (NLP)
  • large language models (LLMs)
  • computer vision
  • reinforcement learning
  • generative and agentic systems, and
  • related intelligent or adaptive systems

Common Pitfalls in AI R&D Claims

As a self-assessment program, the R&D Tax Incentive places responsibility on businesses to interpret the eligibility criteria and correctly assess their own AI activities. This creates the potential for misinterpretation and error.

This guide highlights the most common pitfalls encountered in AI-related claims, so you can make informed decisions and avoid costly mistakes.

Common Errors in AI Claims


Failing to maintain contemporaneous documentation that supports the AI experimentation and associated expenditure


Seeking to claim the development of AI-powered automation, dashboards or chatbots built for internal business processes (such as HR, finance or compliance)


Demonstrating only that you used AI tools in the way they are intended to be used, rather than that you developed a new AI capability. This can include adapting and integrating off-the-shelf AI/ML solutions to suit your workflow without significant experimental development, or performing basic prompt engineering or fine-tuning of pre-built LLMs


Basic model retraining where expected outcomes are already well understood


Mistaking routine development and incremental improvements for innovation


Claiming AI and software development work performed overseas by offshore or outsourced teams without an approved Overseas Finding


Claiming R&D in one entity, while recording ownership of the resulting AI models, algorithms, or data assets in a different group company or IP-holding entity

Unlikely to be Core R&D Activities

  • bug fixing 
  • UI changes 
  • UAT 
  • data cleansing, mapping and migration 
  • routine feature engineering  
  • ongoing system maintenance   

Practical Guide

Understanding the R&D Tax Incentive Program

The R&DTI is Australia’s core mechanism for encouraging business investment in experimental R&D. It is administered jointly by the Department of Industry, Science and Resources (DISR) and the Australian Taxation Office (ATO).

The program offers a tax offset for eligible R&D expenditure incurred by R&D entities. The incentive is intended to reduce the financial burden of genuine innovation, supporting activities that are uncertain, experimental and aimed at generating new knowledge, not merely routine improvements or technical maintenance.

Companies with aggregated turnover under $20 million may receive a refundable tax offset of 43.5% of eligible expenditure (or 48.5% in some circumstances).

Companies with aggregated turnover of $20 million or more are eligible for a non-refundable offset, which will fall between 33.5% and 46.5%, depending on the proportion of R&D spend to total expenditure. This measure is known as R&D intensity.

These rates mean that for each dollar spent on eligible R&D, a company can reduce its tax burden. Loss-making small companies can also receive a cash refund, which makes this program hugely popular among start-ups and scale-ups.
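The rate tiers above can be sketched as a simple lookup. This is an illustrative simplification only: the premiums shown (18.5, 8.5 and 16.5 percentage points on top of the corporate tax rate) reproduce the 43.5%/48.5% and 33.5%–46.5% figures quoted above, but in practice the higher intensity premium applies only to the portion of expenditure above the 2% intensity threshold, and the applicable corporate tax rate depends on the company’s circumstances.

```python
def rd_offset_rate(aggregated_turnover: float, corporate_tax_rate: float,
                   rd_intensity: float) -> float:
    """Headline R&DTI offset rate, simplified from the tiers in this guide."""
    if aggregated_turnover < 20_000_000:
        # Refundable offset: corporate tax rate plus an 18.5-point premium
        return corporate_tax_rate + 0.185
    # Non-refundable offset: premium steps up with R&D intensity
    premium = 0.085 if rd_intensity <= 0.02 else 0.165
    return corporate_tax_rate + premium

# 25% tax rate, small company -> 43.5% refundable offset
print(round(rd_offset_rate(5_000_000, 0.25, rd_intensity=0.10), 3))   # 0.435
# 30% tax rate, large company, high intensity -> 46.5% non-refundable
print(round(rd_offset_rate(50_000_000, 0.30, rd_intensity=0.05), 3))  # 0.465
```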

To qualify, R&D activities must meet key criteria:


They must involve a core R&D activity: that is, one whose outcome is not known or readily determinable in advance on the basis of existing knowledge or experience.


The work must involve experiments, with a systematic progression from hypothesis through to testing, observation and evaluation.


The purpose must be to generate new knowledge (either theoretical or applied) that will ultimately contribute to improved or novel products, processes, services or materials.


Supporting R&D activities, which are those undertaken for the purpose of supporting the core activity, may also be eligible.

Eligible R&D Expenses

Eligible R&D expenditure includes costs that are directly related to conducting your core and supporting R&D activities. Usually these will be:


Salary and wages for employees directly engaged in R&D, including software engineers and data scientists


Payments to contractors where the work relates to R&D and is conducted on behalf of the company


“Other” expenses, which can include rent, utilities, cloud hosting, and software licences


Depreciation on assets like computer and office equipment


Payments to registered research service providers, including universities or CSIRO


Feedstock (consumables used up in R&D), which is less common in the AI field

Eligible AI Businesses

To access the R&D Tax Incentive, a business must first meet the basic eligibility requirements. Only Australian resident companies incorporated under Australian law (or foreign companies with a permanent establishment in Australia) can claim.

Sole traders, partnerships, and trusts (except public trading trusts with a corporate trustee) are not eligible.

Beyond entity type, the business must be conducting eligible R&D activities. For AI businesses, eligible activities may include developing novel model architectures, adapting existing algorithms to non-trivial new domains, designing new feature extraction or data labelling techniques where standard approaches fail, optimising model performance under constrained or variable conditions, or investigating unexpected behaviour in model generalisation.

Routine use of off-the-shelf models or tools, deployment of existing APIs, or software customisation for internal use are unlikely to qualify unless they form part of a broader experimental activity with technical uncertainty.

Have you come up with a brilliant and elaborate prompt for a GPT model? Unfortunately, that alone is unlikely to qualify. Using a pre-trained model in the way it was designed to be used typically does not involve any scientific or technical uncertainty, nor does it require systematic experimental evaluation, no matter how sophisticated the prompt. Without modifying, adapting, or assessing the model in a way that addresses a technical unknown, it's unlikely to meet the R&D criteria.

Eligible projects can span various fields, as long as the R&D activities meet the legislative definition. We are seeing successful claims in agriculture, logistics, finance, insurance, marketing, health and education, among other fields.


Core R&D Activities

Core activities in AI typically involve experimental work undertaken to resolve a specific technical uncertainty that cannot be resolved with existing knowledge or standard practice. The work must follow a systematic approach, forming hypotheses, conducting tests, collecting observations, and drawing conclusions.

Examples of core AI R&D activities might include:

  • Testing whether a novel model architecture (e.g. transformer variant, diffusion model, graph neural network) can achieve targeted performance on a previously unmodelled or highly variable dataset 
  • Investigating unexplained model behaviours, such as bias amplification, performance degradation, or prediction instability, where known methods (e.g. fine-tuning, calibration, ensembling) do not provide a reliable fix 
  • Designing a custom embedding, data representation or attention mechanism to integrate and reason over multi-source or multi-modal inputs (e.g. combining text, sensor data, and time-series signals) 
  • Evaluating reinforcement learning or other agent-based techniques in environments where input states are partially observable, highly dynamic, or sparsely labelled 
  • Experimenting with privacy-preserving techniques, such as synthetic data generation, differential privacy, or secure model deployment, to determine whether performance can be maintained without exposing sensitive data 

What qualifies these as core R&D is not just the complexity of the task, but the presence of technical uncertainty, where an experienced and qualified AI professional could not predict whether or how a desired outcome could be achieved. We also look at the genuine need for structured, hypothesis-driven experimentation to resolve that uncertainty.


Supporting R&D Activities

Supporting activities are those that are directly related to a core R&D activity. In some cases, such as where the activity produces goods or services, or involves excluded software development, they must also be conducted for the dominant purpose of supporting the core R&D.

Examples of supporting activities in AI might include:

  • Pre-processing data or synthesising training datasets used in an experiment
  • Background research on publications, industry best practices, or learning to use new tools
  • Building test infrastructure or tuning model parameters prior to core testing
  • Running baseline models to establish control conditions

These tasks are not experimental in themselves, but they are necessary to conduct the core R&D. Importantly, they would not have been undertaken at all or in the same way if not for the core activity.

Correctly identifying and separating core and supporting activities is essential for both eligibility and compliance. Projects that lack a clearly defined core activity, or where all work is routine, are unlikely to qualify under the program.

Eligible AI Development Activities

AI development activities that may qualify for the R&D Tax Incentive are those that address genuine technical uncertainty, where results cannot be known in advance, and established techniques do not offer clear solutions.

These eligible activities typically involve:

  • Tackling problems where the outcomes are not predictable using current methods or industry best practices
  • Designing and training models where the relationship between variables and results is non-deterministic, requiring systematic testing and analysis
  • Developing, experimenting with, or applying novel algorithms, model architectures, or custom feature engineering techniques to new or complex challenges
  • Investigating how different data types, algorithms, and operating environments behave and interact, especially in scenarios with unknown or unpredictable results

Examples of eligible activities may include: 

  • Developing a brand new algorithm to spot rare events in medical images 
  • Creating a custom neural network to make sense of data from many different types of sensors 
  • Designing new ways to clean and connect messy, incomplete data 
  • Testing how your model works in totally new situations, where you can’t predict if it will succeed 

Understanding Ineligible Activities for the R&D Tax Incentive

While the R&D Tax Incentive is designed to support genuine innovation, not every AI project will qualify. The program specifically excludes certain types of work, particularly where the activities are routine, administrative, or do not involve significant technical uncertainty. Understanding these boundaries early helps ensure your claim is focused, compliant, and unlikely to face issues under review.


Internal Use Software

Software developed mainly for a business’s own internal administration, or for use by an affiliate, is an excluded activity under the R&D Tax Incentive. This includes AI-powered tools designed solely to support functions like payroll, HR, accounting, compliance reporting, and internal workflow management.

Examples of excluded activities include:  

  • Deploying a chatbot to handle HR queries 
  • Designing an AI agent bot to onboard employees 
  • Using an algorithm to automate invoice processing 
  • Implementing AI-driven dashboards for monitoring performance metrics, KPIs or workflow status 

Digital Process Automation

Digital process automation refers to using software (including AI) to streamline, script, or automate steps in existing business processes. While these solutions are helpful and often apply AI in novel business contexts, they are generally implemented using established techniques and tools, and the outcomes are predictable, which excludes them from eligibility under the R&DTI program.

Examples of digital process automation activities likely to be excluded include:  

  • integrating existing off-the-shelf SaaS tools using APIs 
  • using established LLM products for their intended purpose (i.e. prompt engineering) 
  • using low-code/no-code platforms  

Other Exclusions 

In addition to routine software development and automation, the following activities are not eligible as core activities under the R&D Tax Incentive: 

  • market research, market testing, market development, or sales promotion. This includes consumer surveys 
  • commercial, legal and administrative aspects of patenting, licensing or other activities  
  • activities associated with complying with statutory requirements  
  • research in social sciences, arts or humanities 

While these exclusions are less common in AI development, it is important to be aware of them for projects that may intersect with these areas. 

Still Not Sure if Your Activities Are Eligible? 

R&D tax legislation can be complex and the boundary between eligible and ineligible activities is not always obvious. If you are uncertain about your project’s eligibility, consulting a specialist advisor experienced in the software and AI sector (such as BlueRock) can help you clarify your position.  

Documentation and Record Keeping Requirements 

For the purposes of the R&D Tax Incentive, maintaining evidence and good documentation is not an afterthought or just a best practice. Record-keeping is a mandatory requirement that directly determines your eligibility. If you can’t produce the right records, your claim may be denied or reduced.

All records must be contemporaneous - meaning they need to be created as the work and spending happen, not produced later in response to a review or audit. 

If you’re new to the program, this might sound daunting, but it doesn’t have to be. With the right processes and support, effective record-keeping can be straightforward and built into your workflow from day one. 


Evidence for Core R&D Activities

For core activities, you must retain evidence that (1) a genuine technical uncertainty existed; and (2) the experimental work was undertaken to investigate this uncertainty using a hypothesis-driven approach.  

This documentation should demonstrate how your experiments were structured, executed, and evaluated. 

Examples of suitable records: 

  • Project briefs or technical design documents: Confluence project pages outlining the technical problem
  • Research literature reviews: saved PDFs or annotated summaries referencing peer-reviewed papers or arXiv preprints showing the knowledge gap
  • Experiment plans and protocols: Jupyter Notebooks or markdown files in your codebase defining model parameters, data splits, and hypotheses
  • Source code repositories and version control logs: GitHub or GitLab commit histories, with branches and pull requests reflecting variations of model development and experimental runs
  • Model training logs or output files: TensorBoard visualisations, MLflow tracking runs and CSV exports showing metrics such as F1 score, accuracy, precision, recall, and confusion matrices for each experiment cycle
  • Analysis reports: PowerPoint presentations, Google Slides, or internal Notion pages summarising outcomes, failures, and the evolution of the investigation

Evidence for Supporting Activities

Supporting activities require documentation showing how each task directly relates to, and enables, the core R&D activity, as well as confirmation that the work was actually performed. 

Examples of suitable records: 

  • Task management records: Jira tickets, Asana tasks, or Trello cards linking activities such as data cleaning or feature engineering to specific core R&D experiments
  • Meeting notes or planning documents: notes in Notion, Confluence or Google Docs recording discussions about infrastructure setup
  • Chat histories reflecting technical collaboration and decisions: Slack or Microsoft Teams messages, or email chains with relevant correspondence
  • Maintenance logs: records from system dashboards, DevOps logs or custom checklists that track updates and fixes in response to R&D testing requirements

Evidence for R&D expenditure

For each expense claimed, you must be able to substantiate that it was incurred, and it has a direct connection to an R&D activity. 

Examples of suitable records: 

  • Payroll and timesheet records: entries in systems such as Employment Hero, Deputy, Excel, Harvest, or Toggl, mapping staff hours directly to R&D sprints or experiment cycles
  • Invoices: dated and itemised documents specifying the amount payable, payment terms, and a clear description of the work performed, directly linked to experimental milestones or R&D activities
  • Contractor agreements: signed and dated agreements that clearly outline the nature, scope, and relevance of contract work to core or supporting R&D activities
  • Bank statements: official bank feeds, statements, or remittance slips showing actual payments made for eligible R&D expenses
  • Calculations and logbooks: spreadsheets, project diaries, allocation workbooks, or digital logbooks that support how shared costs (such as cloud services or staff time) are reasonably apportioned to R&D activities
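The last item, apportioning shared costs, can be illustrated with a minimal sketch. The figures and the hours basis here are hypothetical: the program simply requires a reasonable, documented basis, and for costs like rent a floor-area or headcount basis may be more appropriate than hours.

```python
def apportion_shared_cost(total_cost: float, rd_hours: float,
                          total_hours: float) -> float:
    """Apportion a shared cost (e.g. a cloud bill) to R&D on a time basis."""
    if total_hours <= 0:
        raise ValueError("total_hours must be positive")
    return total_cost * (rd_hours / total_hours)

# Hypothetical: a $12,000 cloud bill where 300 of 1,200 logged hours were R&D
print(apportion_shared_cost(12_000, 300, 1_200))  # 3000.0
```

Whatever basis is chosen, the workbook or logbook recording the inputs (hours, invoices, percentages) should be kept as part of the expenditure evidence described above.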

Case Study 1: AI-Powered SaaS Tool for Hospital Operations


This case study is a fictional amalgamation of the work of real BlueRock clients.

Business Background and Objectives 

ClarityB1 is a Melbourne-based SaaS company developing an AI-powered operational intelligence platform designed specifically for use by hospitals. Their product aims to unify complex, disparate datasets into a secure environment accessible via a natural language interface, spanning patient admissions, staff rosters, clinical activity, facilities management, and procurement. Their business model is licensing the product to hospitals to use as a management tool. 

Through this interface, staff ranging from administrators to clinicians and executive teams can ask context-specific operational questions, such as: 

“Which wards are most likely to require agency staff in the next fortnight?” 

“What was the average turnaround time on critical maintenance requests during Q1?” 

“How many nursing shifts were missed or rescheduled over Christmas last year?” 

“Where are the biggest variances between forecast and actual bed usage in the past three weeks?” 

The company’s R&D objectives have focused on: 

  • Developing a reliable and explainable AI system capable of interpreting ambiguous queries across different user types 
  • Safely modelling operational scenarios without compromising patient privacy 
  • Testing whether their model could predict staffing load variances during periods of abnormal demand (e.g. December–January holiday season) 
  • Designing a data ingestion pipeline that could operate securely across multiple hospital systems, each with different formats, levels of completeness and governance rules 

Core R&D Activities

Core Activity 1: Development and testing of a generalisable query engine across variable clinical datasets 

The team designed a novel approach to parse natural language queries from hospital staff and resolve them into structured representations across heterogeneous datasets. The experiment sought to determine whether a custom intent-classification and entity-mapping model could perform consistently across both structured (e.g. rosters, schedules) and semi-structured data (e.g. free-text reports, maintenance logs), in an environment with variable schema definitions and inconsistent terminology.

Core Activity 2: Experimental development of a privacy-preserving data modelling framework 

ClarityB1 tested whether it could forecast resource loads, particularly nursing staff requirements, without accessing or exposing personally identifiable information. This involved developing and evaluating new methods for injecting synthetic or obfuscated data into training workflows while maintaining model fidelity. The outcome of this activity could not be determined in advance due to the unknown effects of these techniques on prediction accuracy and model drift. 


Supporting R&D Activities

Supporting Activity 1: Integration and standardisation of clinical and operational data sources

Before experiments could be run, the team built tools to ingest, clean and transform disparate datasets from partner hospitals. While this work was not experimental in itself, it was directly related to and necessary for the core activities involving model development and testing.

Supporting Activity 2: Development of internal testing environment and MLOps infrastructure

The team designed a secure, cloud-based environment to isolate and run experimental models. This environment allowed for version control, rollback, and auditing of data access, and supported traceability during testing. This infrastructure was developed specifically to support core R&D experiments.

R&D Expenditure 

ClarityB1 claimed the following eligible R&D costs: 

Salaries

  • Head of Engineering: 1,000 hours (55% of a $200,000 package)
  • Data Scientist: 1,635 hours (90% of a $175,000 package)
  • Software Developers (3): 2,180 hours total (average of 40% of their $160,000 salaries)

Contractor

Fee of $40,000 paid to an external security and cloud computing specialist to advise on secure model deployment and validate the synthetic data environment used in privacy-preserving R&D experiments.

Other Expenditure

  • Rent: 25% of annual office rent attributed to R&D employees 
  • Hosting and cloud services: $15,000 for the secure R&D environment used for model development and testing 

The total R&D expenditure was $529,500, and the company was in a tax loss position, which meant it claimed the full refundable offset for a cash payment of $230,333. 
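As a cross-check, the claim figures above reconcile as follows. The rent dollar amount is not stated in the case study, so it is derived here as the residual of the stated total; the 43.5% rate is the small-company refundable offset described earlier.

```python
# Salary apportionments stated in the case study (integer maths for exactness)
head_of_engineering = 200_000 * 55 // 100        # $110,000
data_scientist = 175_000 * 90 // 100             # $157,500
developers = 3 * (160_000 * 40 // 100)           # $192,000 across three developers
salaries = head_of_engineering + data_scientist + developers  # $459,500

contractor = 40_000
hosting = 15_000
total_claimed = 529_500

# The rent figure is not stated directly; it is implied by the claimed total
implied_rent = total_claimed - (salaries + contractor + hosting)

# Refundable offset at the 43.5% small-company rate
refund = total_claimed * 435 / 1000
print(implied_rent, refund)  # 15000 230332.5 (i.e. the ~$230,333 quoted)
```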

Records Maintained

ClarityB1 kept a set of contemporaneous (i.e. completed in real time) records, including:

  • Time records completed in Google Sheets every Friday by technical staff to track R&D effort
  • Task management records maintained in JIRA, with R&D activities tagged and linked to hypotheses
  • Source code repositories managed in GitHub, with version histories, experimental branches, and commit logs
  • Model training logs and evaluation metrics captured using MLflow, documenting input parameters, outputs, and performance results
  • Meeting notes and technical design documentation recorded in Notion, capturing design decisions, testing plans, and hypothesis discussions
  • Contracts, invoices, and pay runs managed through Xero

How BlueRock Helped


ClarityB1 engaged BlueRock’s Grants & Incentives team to help ensure their R&D claim was fully compliant and robust. The BlueRock team: 

  • Established eligibility of the company to the program 
  • Advised the leadership team on the scope of the R&D Tax Incentive and its application to AI software 
  • Reviewed existing records and guided improvements to documentation practices 
  • Identified and validated the company’s core and supporting R&D activities 
  • Prepared a detailed and technically sound application to AusIndustry 
  • Calculated eligible expenditure, ensuring correct apportionment and treatment of costs 
  • Issued a tax advice letter for the company’s directors, outlining the basis for the claim 
  • Prepared the R&D schedule for the company’s tax return 

Case Study 2: Development of an ML Model for Concrete Defect Detection in Motorsport Racetracks 


This case study is a fictional amalgamation of the work of real BlueRock clients.

Business Background and Objectives

TrackScan Technologies is a Sydney-based company developing an automated inspection platform for racetrack maintenance and safety teams. Their core product is an ML model designed to analyse high-resolution images captured by specialised drone or vehicle-mounted cameras and provide an automated severity and classification score for concrete defects such as cracks, spalling, and alkali-silica reaction (ASR) damage. 

The platform aims to assist track engineers in triaging maintenance needs and improving the consistency and speed of track surface assessment. Key functionalities include: 

  • Rapid assessment of defect type and severity from uploaded high-resolution track surface images 
  • Highlighting image regions that contain defects to guide ground crews 
  • Tracking the growth and change of specific defect areas over time from sequential inspection runs 

The company's R&D objectives have focused on: 

  • Developing a reliable and robust Convolutional Neural Network capable of accurately classifying different defect types in diverse, real-world images (e.g., wet, dry, high-texture, low-texture surfaces) 
  • Addressing the challenge of data imbalance, where minor hairline cracks far outnumber severe structural defects (like deep spalling) in inspection datasets 
  • Testing whether their model could maintain high performance across images captured at different speeds, angles, and varying light/weather conditions (domain generalisation) 
  • Designing an efficient, real-time data pipeline to process the massive volumes of high-resolution image data generated during a typical track inspection run 

Core R&D Activities

Core Activity 1: Experimental development of a novel CNN architecture for robust defect feature extraction 

The team designed and tested a custom deep learning model architecture to improve the extraction of subtle, diagnostically relevant features (e.g., crack width, spalling depth, ASR 'map cracking' patterns) from the complex, textured racetrack images compared to standard architectures (e.g., VGG, EfficientNet). The experiment sought to determine if this new architecture could achieve a statistically significant improvement in Intersection over Union (IoU) score for pixel-level defect segmentation over baseline models when applied to an independent test set comprising images from various international circuits and different inspection hardware.

Core Activity 2: Development and testing of a data augmentation and transfer learning strategy to mitigate data imbalance 

TrackScan Technologies tested various transfer learning and advanced data augmentation techniques (such as geometric distortions mimicking camera shake, texture and colour variation simulating water and oil stains) to determine the optimal strategy for improving model training efficiency and prediction accuracy for the rare but critical severe defect classes. The uncertainty lay in identifying which combination of techniques would effectively synthesise new data points and rebalance the dataset without introducing artifacts that lead to spurious correlations or misclassification of non-defect track features like painted lines. 


Supporting R&D Activities

Supporting Activity 1: Curation, standardisation, and annotation of track surface image datasets

Before experiments could be run, the team built tools to ingest, clean, and normalise the ultra-high-resolution images from partner racetracks, ensuring consistency in resolution, file format, and GPS metadata. This involved working with expert track engineers and civil materials scientists to develop and apply a standardised segmentation protocol for defect boundaries and classification labels (e.g., hairline crack vs. fatigue crack vs. spalling). While the annotation and cleaning were not experimental, they were directly related to and necessary for the core activities involving model training and testing. 

Supporting Activity 2: Development of internal testing environment and MLOps infrastructure 

The team designed a secure, high-performance, cloud-based environment to isolate and run large-scale experimental models, specifically to manage the complex training jobs required for deep learning on multi-gigapixel track images. This infrastructure included automated tools for GPU resource allocation, hyperparameter tuning and model version control (MLflow), and supported traceability of the image data and associated environmental conditions (e.g., ambient temperature, humidity) used during testing. This setup was developed specifically to support the core R&D experiments.

Records Maintained

  • Weekly technical logs recorded in Airtable by engineering staff, detailing time spent on core and supporting R&D tasks, annotated with experiment IDs and activity categories 
  • Experiment tracking and version control managed using Weights & Biases, capturing model architecture changes, training metrics (e.g. IoU, precision, recall), and data augmentation configurations across experimental runs 
  • Image dataset annotations and revisions stored in CVAT, with audit trails of contributor input, label versions, and segmentation schema updates 
  • Cloud training environment logs automatically captured through AWS, including hardware utilisation, runtime parameters, and model artefacts tied to specific Git commit hashes 
  • Design rationales, test plans, and post-experiment evaluations documented in Confluence, including links to experiment dashboards, internal peer reviews, and data validation outcomes 
  • Financial records and contractor invoices managed through MYOB, tagged against R&D cost centres aligned with eligible experimental activity streams 
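Contemporaneous experiment records like those above don't require a dedicated platform. As a minimal, dependency-free stand-in for tools such as Weights & Biases or MLflow, an append-only JSON-lines log works (all names and figures below are illustrative):

```python
import json
import time
from pathlib import Path

def log_run(path: Path, experiment_id: str, config: dict, metrics: dict) -> None:
    """Append one experiment record (config, metrics, timestamp) as a JSON line."""
    record = {
        "experiment_id": experiment_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "config": config,
        "metrics": metrics,
    }
    with path.open("a") as f:
        f.write(json.dumps(record) + "\n")

log_path = Path("experiments.jsonl")
log_run(log_path, "EXP-042",
        {"architecture": "custom-cnn", "augmentation": "geometric+texture"},
        {"iou": 0.71, "precision": 0.83, "recall": 0.78})
print(log_path.read_text().splitlines()[-1])
```

Because each line is timestamped and tied to an experiment ID, the log doubles as evidence of a systematic progression of work.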

Why Specialist Expertise Matters when Claiming R&D 

For companies within the software and AI industry, the R&D Tax Incentive offers a powerful way to recoup development costs and accelerate innovation. However, with regulatory requirements for the program frequently changing and scrutiny of AI claims increasing, demonstrating genuine technical uncertainty and eligibility in this field is more challenging than ever.  

Specialist guidance from trusted advisors such as BlueRock can make a critical difference at every stage of your R&D journey, helping to navigate the complex and shifting rules.

What You Gain From Engaging a Trusted Advisor

Advisors have a deep technical understanding of the nuances of the program and know how to frame your innovation in line with the legislation. By engaging a specialised advisor, you can have confidence that your AI or software projects meet the R&D Tax Incentive’s strict technical criteria.

As expert advisors we can help you interpret evolving regulations, structure your documentation, and reduce the risk of errors, audits, or rejected claims, so you can move forward with certainty. 

With in-depth knowledge of the rules specific to software and AI, we can identify all eligible activities and costs, ensuring you claim the maximum benefit you’re entitled to without overstepping compliance boundaries.

This means you won’t leave valuable incentives on the table or face penalties from misinterpreting the fine print. 

Navigating the R&D claim process is complex and time-consuming, especially for technical teams used to building products, not writing detailed applications.  

By handing over the claim preparation, you free up your team to concentrate on innovation and commercial impact, instead of getting bogged down in forms and deadlines. 

When to Engage an Advisor


Early scoping

Before commencing your project or committing significant resources, to determine which planned activities are genuinely eligible. 


During project delivery

To ensure ongoing documentation aligns with regulatory requirements and to adapt processes in response to changing business needs. 


At registration and claim preparation

When compiling technical information and financial data for submission, and before lodging your Company Tax Return. 

The BlueRock Advantage


An eye for detail 

We take a thorough, hands-on approach to preparing R&D claims, spending time on the small details so your team doesn’t have to. Our process is designed to minimise disruption while capturing the information needed to support a well-documented and defensible claim.

"i" icon with a circular border

Relatable, knowledgeable team 

Our Grants and Incentives team is made up of specialists with deep experience across science, technology, and commercial fields. This allows us to translate your technical concepts into clear, compelling applications that speak directly to assessors. 


A holistic approach 

As a multidisciplinary firm, BlueRock unites specialists in R&D, accounting, legal, digital, and commercial fields - all under one roof. This integrated approach ensures every aspect of your claim is expertly managed, from project scoping through to tax strategy and compliance. Whatever your question or concern, we have the right expert on hand to support you.


Relationships come first

We prioritise open, supportive partnerships with our clients, working closely with founders and technical teams. Your goals are our goals, and we go the extra mile to help you succeed. 

With BlueRock as your advisor, you gain a trusted partner committed to transparency, technical depth, and outstanding client outcomes, so you can innovate boldly, knowing your R&D investment is protected and supported at every stage.

Let's Fetch You Some Funding



get in touch


You’ll learn how these activities interact with the R&DTI framework, what qualifies, what does not, and how to frame your work to reflect the genuine R&D it may contain.

Readers should not rely on this guide as formal advice. It is intended for general information only and is not a substitute for financial, legal or tax advisory services. Advice you rely on to make important commercial, legal and tax decisions should always be tailored to your specific circumstances. Companies should review the relevant legislation, including Division 355 of the Income Tax Assessment Act 1997, and seek professional guidance to ensure their R&D claims are compliant.

Why the R&D Tax Incentive Matters

To support this kind of innovation, the Research and Development Tax Incentive (R&DTI) offers a tax offset to eligible Australian companies that conduct experimental R&D activities. The incentive is designed to reduce the cost of developing new or improved products, processes, devices, or services, including those involving AI. It does this by offering a tax offset and, for some companies in a tax loss position, a cash refund.

Companies can recoup between 33.5% and 48.5% of their eligible R&D expenditure through the program; the exact rate depends on company size, tax rate and R&D intensity.

The R&DTI encourages companies to invest in activities that involve technical uncertainty, require systematic experimentation, and have the purpose of generating new knowledge—exactly the type of work undertaken in much of today’s AI development.

So, is AI development eligible?

That’s the key question, and the short answer is: it depends.

The R&DTI framework is built on core principles that every claimant must understand:

  • core and supporting R&D activities
  • unknown outcomes and technical uncertainty
  • systematic progression of work
  • hypotheses, experiments, evaluation and conclusions

Many AI projects can meet these criteria, particularly those involving novel model development, advanced tuning, integration in unpredictable environments, or unexplored use cases. However, others that focus purely on deploying existing models or routine automation may not.

We use the term “AI” broadly to cover a range of subfields and methods, including:

  • machine learning (ML)
  • deep learning
  • natural language processing (NLP)
  • large language models (LLMs)
  • computer vision
  • reinforcement learning
  • generative and agentic systems, and
  • related intelligent or adaptive systems

We aim to help you understand how these activities interact with the R&DTI framework—what qualifies, what doesn’t, and how to frame your work to reflect the genuine R&D it may contain.

Common Pitfalls in AI R&D Claims

As a self-assessment program, the R&D Tax Incentive places responsibility on businesses to interpret the eligibility criteria and correctly assess their own AI activities. This creates the potential for misinterpretation and error.

This guide highlights the most common pitfalls encountered in AI-related claims, so you can make informed decisions and avoid costly mistakes.

Common Errors in AI Claims


Failing to maintain contemporaneous documentation that supports the AI experimentation and associated expenditure


Seeking to claim the development of AI-powered automation, dashboards or chatbots built for internal business processes (such as HR, finance or compliance)


Claiming the use of AI tools in the way they are intended to be used, rather than demonstrating that you developed a new AI capability. This can include adapting and integrating off-the-shelf AI/ML solutions to suit your workflow without significant experimental development, or performing basic prompt engineering or fine-tuning of pre-built LLMs


Basic model retraining where expected outcomes are already well understood


Confusing routine development and incremental improvements as innovation


Claiming AI and software development work performed overseas by offshore or outsourced teams 


Claiming R&D in one entity, while recording ownership of the resulting AI models, algorithms, or data assets in a different group company or IP-holding entity

Unlikely to be Core R&D Activities

  • bug fixing 
  • UI changes 
  • UAT 
  • data cleansing, mapping and migration 
  • routine feature engineering  
  • ongoing system maintenance   

Practical Guide

Understanding the R&D Tax Incentive Program

The R&DTI is Australia’s core mechanism for encouraging business investment in experimental R&D. It is administered jointly by the Department of Industry, Science and Resources (DISR) and the Australian Taxation Office (ATO).

The program offers a tax offset for eligible R&D expenditure incurred by R&D entities. The incentive is intended to reduce the financial burden of genuine innovation, supporting activities that are uncertain, experimental and aimed at generating new knowledge, not merely routine improvements or technical maintenance.

Companies with aggregated turnover under $20 million may receive a refundable tax offset of 43.5% of eligible expenditure (or 48.5% in some circumstances).

Companies with aggregated turnover of $20 million or more are eligible for a non-refundable offset, which will fall between 33.5% and 46.5%, depending on the proportion of R&D spend to total expenditure. This measure is known as R&D intensity.

These rates mean that each dollar spent on eligible R&D reduces a company’s tax liability. Loss-making small companies can instead receive a cash refund, which makes this program hugely popular among start-ups and scale-ups.
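The arithmetic for the refundable stream can be sketched as follows (the rates are those described above: the corporate tax rate plus an 18.5% premium; the non-refundable intensity tiers are omitted, and the company figures are hypothetical):

```python
def refundable_offset(expenditure: float, corporate_tax_rate: float) -> float:
    """Refundable R&D tax offset for companies with aggregated turnover
    under $20 million: the corporate tax rate plus an 18.5% premium,
    applied to eligible R&D expenditure. Illustrative only."""
    return expenditure * (corporate_tax_rate + 0.185)

# A base-rate entity (25% corporate tax rate) with $500,000 of eligible spend
# receives the offset at 43.5%:
print(round(refundable_offset(500_000, 0.25), 2))  # → 217500.0
```

A company taxed at 30% would instead receive the 48.5% rate, which is where the upper figure quoted earlier in this guide comes from.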

To qualify, R&D activities must meet key criteria:


They must involve a core R&D activity: one whose outcome is not known or readily determinable in advance based on existing knowledge or experience.


The work must involve experiments, with a systematic progression from hypothesis through to testing, observation and evaluation.


The purpose must be to generate new knowledge (either theoretical or applied) that will ultimately contribute to improved or novel products, processes, services or materials.


Supporting R&D activities, which are those undertaken for the purpose of supporting the core activity, may also be eligible.

Eligible R&D Expenses

Eligible R&D expenditure includes costs that are directly related to conducting your core and supporting R&D activities. Usually these will be:


Salary and wages for employees directly engaged in R&D, including software engineers and data scientists


Payments to contractors where the work relates to R&D and is conducted on behalf of the company


“Other” expenses, which can include rent, utilities, cloud hosting, and software licences


Depreciation on assets like computer and office equipment


Payments to registered research service providers, including universities or CSIRO


Feedstock, meaning consumables used up in R&D, which is less common in AI projects

Eligible AI Businesses

To access the R&D Tax Incentive, a business must first meet the basic eligibility requirements. Only Australian resident companies incorporated under Australian law (or foreign companies with a permanent establishment in Australia) can claim.

Sole traders, partnerships, and trusts (except public trading trusts with a corporate trustee) are not eligible.

Beyond entity type, the business must be conducting eligible R&D activities. For AI businesses, eligible activities may include developing novel model architectures, adapting existing algorithms to non-trivial new domains, designing new feature extraction or data labelling techniques where standard approaches fail, optimising model performance under constrained or variable conditions, or investigating unexpected behaviour in model generalisation.

Routine use of off-the-shelf models or tools, deployment of existing APIs, or software customisation for internal use are unlikely to qualify unless they form part of a broader experimental activity with technical uncertainty.

Have you come up with a brilliant and elaborate prompt for a GPT model? Unfortunately, that alone is unlikely to qualify. Using a pre-trained model in the way it was designed to be used typically does not involve any scientific or technical uncertainty, nor does it require systematic experimental evaluation, no matter how sophisticated the prompt. Without modifying, adapting, or assessing the model in a way that addresses a technical unknown, it's unlikely to meet the R&D criteria.

Eligible projects can span various fields, as long as the R&D activities meet the legislative definition. We are seeing successful claims in agriculture, logistics, finance, insurance, marketing, health and education, among other fields.


Core R&D Activities

Core activities in AI typically involve experimental work undertaken to resolve a specific technical uncertainty that cannot be resolved with existing knowledge or standard practice. The work must follow a systematic approach, forming hypotheses, conducting tests, collecting observations, and drawing conclusions.

Examples of core AI R&D activities might include:

  • Testing whether a novel model architecture (e.g. transformer variant, diffusion model, graph neural network) can achieve targeted performance on a previously unmodelled or highly variable dataset 
  • Investigating unexplained model behaviours, such as bias amplification, performance degradation, or prediction instability, where known methods (e.g. fine-tuning, calibration, ensembling) do not provide a reliable fix 
  • Designing a custom embedding, data representation or attention mechanism to integrate and reason over multi-source or multi-modal inputs (e.g. combining text, sensor data, and time-series signals) 
  • Evaluating reinforcement learning or other agent-based techniques in environments where input states are partially observable, highly dynamic, or sparsely labelled 
  • Experimenting with privacy-preserving techniques, such as synthetic data generation, differential privacy, or secure model deployment, to determine whether performance can be maintained without exposing sensitive data 

What qualifies these as core R&D is not just the complexity of the task, but the presence of technical uncertainty, where an experienced and qualified AI professional could not predict whether or how a desired outcome could be achieved. We also look at the genuine need for structured, hypothesis-driven experimentation to resolve that uncertainty.


Supporting R&D Activities

Supporting activities are those that are directly related to a core R&D activity. In some cases, such as where the activity produces goods or services, or involves excluded software development, they must also be conducted for the dominant purpose of supporting the core R&D.

Examples of supporting activities in AI might include:

  • Pre-processing data or synthesising training datasets used in an experiment
  • Background research on publications, industry best practices, or learning to use new tools
  • Building test infrastructure or tuning model parameters prior to core testing
  • Running baseline models to establish control conditions

These tasks are not experimental in themselves, but they are necessary to conduct the core R&D. Importantly, they would not have been undertaken at all or in the same way if not for the core activity.

Correctly identifying and separating core and supporting activities is essential for both eligibility and compliance. Projects that lack a clearly defined core activity, or where all work is routine, are unlikely to qualify under the program.

Eligible AI Development Activities

AI development activities that may qualify for the R&D Tax Incentive are those that address genuine technical uncertainty, where results cannot be known in advance, and established techniques do not offer clear solutions.

These eligible activities typically involve:

  • Tackling problems where the outcomes are not predictable using current methods or industry best practices
  • Designing and training models where the relationship between variables and results is non-deterministic, requiring systematic testing and analysis
  • Developing, experimenting with, or applying novel algorithms, model architectures, or custom feature engineering techniques to new or complex challenges
  • Investigating how different data types, algorithms, and operating environments behave and interact, especially in scenarios with unknown or unpredictable results

Examples of eligible activities may include: 

  • Developing a brand new algorithm to spot rare events in medical images 
  • Creating a custom neural network to make sense of data from many different types of sensors 
  • Designing new ways to clean and connect messy, incomplete data 
  • Testing how your model works in totally new situations, where you can’t predict if it will succeed 

Understanding Ineligible Activities for the R&D Tax Incentive

While the R&D Tax Incentive is designed to support genuine innovation, not every AI project will qualify. The program specifically excludes certain types of work, particularly where the activities are routine, administrative, or do not involve significant technical uncertainty. Understanding these boundaries early helps ensure your claim is focused, compliant, and unlikely to face issues under review.


Internal Use Software

Software developed mainly for a business’s own internal administration or for use by an affiliate is an excluded activity under the R&D Tax Incentive. This includes AI-powered tools designed solely to support functions like payroll, HR, accounting, compliance reporting, and internal workflow management. 

Examples of excluded activities include:  

  • Deploying a chatbot to handle HR queries 
  • Designing an AI agent bot to onboard employees 
  • Using an algorithm to automate invoice processing 
  • Implementing AI-driven dashboards for monitoring performance metrics, KPIs or workflow status 

Digital Process Automation

Digital process automation refers to using software (including AI) to streamline, script, or automate steps in existing business processes. While these solutions are valuable to the business, they are generally implemented using established techniques and tools, with predictable outcomes, which excludes them from eligibility under the R&DTI program.  

Examples of digital process automation activities likely to be excluded include:  

  • integrating existing off-the-shelf SaaS tools using APIs 
  • using established LLM products for their intended purpose (i.e. prompt engineering) 
  • using low-code/no-code platforms  

Other Exclusions 

In addition to routine software development and automation, the following activities are not eligible as core activities under the R&D Tax Incentive: 

  • market research, market testing, market development, or sales promotion. This includes consumer surveys 
  • commercial, legal and administrative aspects of patenting, licensing or other activities  
  • activities associated with complying with statutory requirements  
  • research in social sciences, arts or humanities 

While these exclusions are less common in AI development, it is important to be aware of them for projects that may intersect with these areas. 

Still Not Sure if Your Activities Are Eligible? 

R&D tax legislation can be complex and the boundary between eligible and ineligible activities is not always obvious. If you are uncertain about your project’s eligibility, consulting a specialist advisor experienced in the software and AI sector (such as BlueRock) can help you clarify your position.  

Documentation and Record Keeping Requirements 

For the purposes of the R&D tax incentive, maintaining evidence and good documentation is not an afterthought or just a best practice. Recordkeeping is a mandatory requirement that directly determines your eligibility for the R&D Tax Incentive. If you can’t produce the right records, your claim may be denied or reduced. 

All records must be contemporaneous, meaning they need to be created as the work and spending happen, not produced later in response to a review or audit. 

If you’re new to the program, this might sound daunting, but it doesn’t have to be. With the right processes and support, effective record-keeping can be straightforward and built into your workflow from day one. 


Evidence for Core R&D Activities

For core activities, you must retain evidence that (1) a genuine technical uncertainty existed; and (2) the experimental work was undertaken to investigate this uncertainty using a hypothesis-driven approach.  

This documentation should demonstrate how your experiments were structured, executed, and evaluated. 

Examples of suitable records: 

  • Project briefs or technical design documents: Confluence project pages outlining the technical problem 
  • Research literature reviews: saved PDFs or annotated summaries referencing peer-reviewed papers or arXiv preprints showing the knowledge gap 
  • Experiment plans and protocols: Jupyter Notebooks or markdown files in your codebase defining model parameters, data splits, and hypotheses 
  • Source code repositories and version control logs: GitHub or GitLab commit histories, with branches and pull requests reflecting variations of model development and experimental runs 
  • Model training logs or output files: TensorBoard visualisations, MLflow tracking runs and CSV exports showing metrics such as F1 score, accuracy, precision, recall and confusion matrices for each experiment cycle 
  • Analysis reports: PowerPoint presentations, Google Slides, or internal Notion pages summarising outcomes, failures, and the evolution of the investigation 
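The metrics named above can always be recomputed from a stored confusion matrix, which is one reason such matrices make good contemporaneous evidence. A small sketch with hypothetical figures:

```python
import numpy as np

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple:
    """Derive precision, recall, and F1 score from confusion matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Binary confusion matrix for one hypothetical experiment cycle:
# rows = actual (defect, no defect), columns = predicted (defect, no defect)
cm = np.array([[80, 20],    # 80 true positives, 20 false negatives
               [10, 890]])  # 10 false positives, 890 true negatives
tp, fn = cm[0]
fp, tn = cm[1]
p, r, f1 = precision_recall_f1(tp, fp, fn)
print(round(p, 3), round(r, 3), round(f1, 3))
```

Logging the raw counts per experiment run, rather than only the derived scores, keeps the evidence auditable.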

Evidence for Supporting Activities

Supporting activities require documentation showing how each task directly relates to, and enables, the core R&D activity, as well as confirmation that the work was actually performed. 

Examples of suitable records: 

  • Task management records: Jira tickets, Asana tasks, or Trello cards linking activities such as data cleaning or feature engineering to specific core R&D experiments 
  • Meeting notes or planning documents: notes in Notion, Confluence or Google Docs recording discussions about infrastructure setup 
  • Chat histories reflecting technical collaboration and decisions: Slack or Microsoft Teams messages, or email chains with relevant correspondence 
  • Maintenance logs: records from system dashboards, DevOps logs or custom checklists that track updates and fixes in response to R&D testing requirements 

Evidence for R&D expenditure

For each expense claimed, you must be able to substantiate that it was incurred, and it has a direct connection to an R&D activity. 

Examples of suitable records: 

  • Payroll and timesheet records: entries in systems such as Employment Hero, Deputy, Excel, Harvest, or Toggl, mapping staff hours directly to R&D sprints or experiment cycles 
  • Invoices: dated and itemised documents specifying the amount payable, payment terms, and a clear description of the work performed, directly linked to experimental milestones or R&D activities 
  • Contractor agreements: signed and dated agreements that clearly outline the nature, scope, and relevance of contract work to core or supporting R&D activities 
  • Bank statements: official bank feeds, statements, or remittance slips showing actual payments made for eligible R&D expenses 
  • Calculations and logbooks: spreadsheets, project diaries, allocation workbooks, or digital logbooks that support how shared costs (such as cloud services or staff time) are reasonably apportioned to R&D activities

Case Study 1: AI-Powered SaaS Tool for Hospital Operations


This case study is a fictional amalgamation of the work of real BlueRock clients.

Business Background and Objectives 

ClarityB1 is a Melbourne-based SaaS company developing an AI-powered operational intelligence platform designed specifically for use by hospitals. Their product aims to unify complex, disparate datasets into a secure environment accessible via a natural language interface, spanning patient admissions, staff rosters, clinical activity, facilities management, and procurement. Their business model is to license the product to hospitals as a management tool. 

Through this interface, staff ranging from administrators to clinicians and executive teams can ask context-specific operational questions, such as: 

“Which wards are most likely to require agency staff in the next fortnight?” 

“What was the average turnaround time on critical maintenance requests during Q1?” 

“How many nursing shifts were missed or rescheduled over Christmas last year?” 

“Where are the biggest variances between forecast and actual bed usage in the past three weeks?” 

The company’s R&D objectives have focused on: 

  • Developing a reliable and explainable AI system capable of interpreting ambiguous queries across different user types 
  • Safely modelling operational scenarios without compromising patient privacy 
  • Testing whether their model could predict staffing load variances during periods of abnormal demand (e.g. December–January holiday season) 
  • Designing a data ingestion pipeline that could operate securely across multiple hospital systems, each with different formats, levels of completeness and governance rules 

Core R&D Activities

Core Activity 1: Development and testing of a generalisable query engine across variable clinical datasets 

The team designed a novel approach to parse natural language queries from hospital staff and resolve them into structured representations across heterogeneous datasets. The experiment sought to determine whether a custom intent-classification and entity-mapping model could perform consistently across both structured (e.g. rosters, schedules) and semi-structured data (e.g. free-text reports, maintenance logs), in an environment with variable schema definitions and inconsistent terminology.

Core Activity 2: Experimental development of a privacy-preserving data modelling framework 

ClarityB1 tested whether it could forecast resource loads, particularly nursing staff requirements, without accessing or exposing personally identifiable information. This involved developing and evaluating new methods for injecting synthetic or obfuscated data into training workflows while maintaining model fidelity. The outcome of this activity could not be determined in advance due to the unknown effects of these techniques on prediction accuracy and model drift. 
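The general idea of obfuscating data before it enters a training workflow can be sketched with Laplace noise on aggregate counts. This is purely illustrative (a differential-privacy-style technique, not ClarityB1's actual framework; all figures hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def obfuscate_counts(counts: np.ndarray, scale: float = 2.0) -> np.ndarray:
    """Add Laplace noise to aggregate counts before they feed a forecasting
    model, then round and clip so outputs remain valid non-negative counts."""
    noisy = counts + rng.laplace(0.0, scale, size=counts.shape)
    return np.clip(np.round(noisy), 0, None)

# Hypothetical daily nursing-shift counts per ward
true_counts = np.array([42.0, 38.0, 51.0, 47.0])
noisy_counts = obfuscate_counts(true_counts)
print(noisy_counts.shape)
```

The experimental uncertainty described above is exactly whether a model trained on such perturbed inputs retains acceptable prediction accuracy, which cannot be known in advance.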


Supporting R&D Activities

Supporting Activity 1: Integration and standardisation of clinical and operational data sources

Before experiments could be run, the team built tools to ingest, clean and transform disparate datasets from partner hospitals. While this work was not experimental in itself, it was directly related to and necessary for the core activities involving model development and testing.

Supporting Activity 2: Development of internal testing environment and MLOps infrastructure

The team designed a secure, cloud-based environment to isolate and run experimental models. This environment allowed for version control, rollback, and auditing of data access, and supported traceability during testing. This infrastructure was developed specifically to support core R&D experiments.

R&D Expenditure 

ClarityB1 claimed the following eligible R&D costs: 

Salaries

  • Head of Engineering: 1,000 hours (55% of a $200,000 package)
  • Data Scientist: 1,635 hours (90% of a $175,000 package)
  • Software Developers (3): 2,180 hours total (average of 40% of their $160,000 salaries)

Contractor

Fee of $40,000 paid to an external security and cloud computing specialist to advise on secure model deployment and validate the synthetic data environment used in privacy-preserving R&D experiments.

Other Expenditure

  • Rent: 25% of annual office rent attributed to R&D employees 
  • Hosting and cloud services: $15,000 for the secure R&D environment used for model development and testing 

The total R&D expenditure was $529,500, and the company was in a tax loss position, which meant it claimed the full refundable offset for a cash payment of $230,333. 
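The refund figure can be cross-checked against the 43.5% refundable rate. A quick sketch, assuming half-up rounding to whole dollars:

```python
from decimal import Decimal, ROUND_HALF_UP

# Figures from the case study above (illustrative only)
eligible_expenditure = Decimal("529500")
refundable_rate = Decimal("0.435")  # 43.5% refundable offset (turnover < $20m)

offset = (eligible_expenditure * refundable_rate).quantize(
    Decimal("1"), rounding=ROUND_HALF_UP
)
# offset -> Decimal('230333'), matching the cash refund quoted
```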

Records Maintained

ClarityB1 kept a set of contemporaneous (i.e. completed in real time) records, including:

  • Time records completed in Google Sheets every Friday by technical staff to track R&D effort
  • Task management records maintained in JIRA, with R&D activities tagged and linked to hypotheses
  • Source code repositories managed in GitHub, with version histories, experimental branches, and commit logs
  • Model training logs and evaluation metrics captured using MLflow, documenting input parameters, outputs, and performance results
  • Meeting notes and technical design documentation recorded in Notion, capturing design decisions, testing plans, and hypothesis discussions
  • Contracts, invoices, and pay runs managed through Xero

How BlueRock Helped


ClarityB1 engaged BlueRock’s Grants & Incentives team to help ensure their R&D claim was fully compliant and robust. The BlueRock team: 

  • Established eligibility of the company to the program 
  • Advised the leadership team on the scope of the R&D Tax Incentive and its application to AI software 
  • Reviewed existing records and guided improvements to documentation practices 
  • Identified and validated the company’s core and supporting R&D activities 
  • Prepared a detailed and technically sound application to AusIndustry 
  • Calculated eligible expenditure, ensuring correct apportionment and treatment of costs 
  • Issued a tax advice letter for the company’s directors, outlining the basis for the claim 
  • Prepared the R&D schedule for the company’s tax return 

Case Study 2: Development of an ML Model for Concrete Defect Detection in Motorsport Racetracks 


This case study is a fictional amalgamation of the work of real BlueRock clients.

Business Background and Objectives

TrackScan Technologies is a Sydney-based company developing an automated inspection platform for racetrack maintenance and safety teams. Their core product is an ML model designed to analyse high-resolution images captured by specialised drone or vehicle-mounted cameras and provide an automated severity and classification score for concrete defects such as cracks, spalling, and alkali-silica reaction (ASR) damage. 

The platform aims to assist track engineers in triaging maintenance needs and improving the consistency and speed of track surface assessment. Key functionalities include: 

  • Rapid assessment of defect type and severity from uploaded high-resolution track surface images 
  • Highlighting image regions that contain defects to guide ground crews 
  • Tracking the growth and change of specific defect areas over time from sequential inspection runs 

The company's R&D objectives have focused on: 

  • Developing a reliable and robust Convolutional Neural Network capable of accurately classifying different defect types in diverse, real-world images (e.g., wet, dry, high-texture, low-texture surfaces) 
  • Addressing the challenge of data imbalance, where minor hairline cracks far outnumber severe structural defects (like deep spalling) in inspection datasets 
  • Testing whether their model could maintain high performance across images captured at different speeds, angles, and varying light/weather conditions (domain generalisation) 
  • Designing an efficient, real-time data pipeline to process the massive volumes of high-resolution image data generated during a typical track inspection run 

Core R&D Activities

Core Activity 1: Experimental development of a novel CNN architecture for robust defect feature extraction 

The team designed and tested a custom deep learning model architecture to improve the extraction of subtle, diagnostically relevant features (e.g., crack width, spalling depth, ASR 'map cracking' patterns) from the complex, textured racetrack images compared to standard architectures (e.g., VGG, EfficientNet). The experiment sought to determine if this new architecture could achieve a statistically significant higher Intersection over Union score for pixel-level defect segmentation than baseline models when applied to an independent test set comprising images from various international circuits and different inspection hardware.
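Intersection over Union, the success metric named here, is simple to compute from binary segmentation masks. A minimal numpy sketch with toy masks (not TrackScan’s data):

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over Union for binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, true).sum() / union

pred = np.array([[1, 1, 0], [0, 1, 0]])
true = np.array([[1, 0, 0], [0, 1, 1]])
score = iou(pred, true)  # intersection 2, union 4 -> 0.5
```

In practice the experiment would compare mean IoU across the independent test set against the baseline architectures, with a statistical test on the difference.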

Core Activity 2: Development and testing of a data augmentation and transfer learning strategy to mitigate data imbalance 

TrackScan Technologies tested various transfer learning and advanced data augmentation techniques (such as geometric distortions mimicking camera shake, texture and colour variation simulating water and oil stains) to determine the optimal strategy for improving model training efficiency and prediction accuracy for the rare but critical severe defect classes. The uncertainty lay in identifying which combination of techniques would effectively synthesise new data points and rebalance the dataset without introducing artifacts that lead to spurious correlations or misclassification of non-defect track features like painted lines. 
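TrackScan’s augmentation stack isn’t spelled out, so as a rough, numpy-only illustration of the kinds of geometric and photometric transforms described (a real pipeline would typically use a library such as Albumentations or torchvision):

```python
import numpy as np

def augment(image, rng):
    """One randomised augmentation pass: horizontal flip, coarse rotation,
    and brightness jitter standing in for water/oil-stain variation."""
    out = image.astype(float)
    if rng.random() < 0.5:
        out = out[:, ::-1]                          # horizontal flip
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # 90-degree rotation steps
    out = np.clip(out * rng.uniform(0.8, 1.2), 0, 255)  # brightness jitter
    return out

rng = np.random.default_rng(42)
img = np.full((64, 64), 128.0)  # toy stand-in for a track-surface tile
aug = augment(img, rng)
```

The experimental uncertainty lay not in applying such transforms but in finding the combination that rebalances rare defect classes without introducing artefacts.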


Supporting R&D Activities

Supporting Activity 1: Curation, standardisation, and annotation of track surface image datasets

Before experiments could be run, the team built tools to ingest, clean, and normalise the ultra-high-resolution images from partner racetracks, ensuring consistency in resolution, file format, and GPS metadata. This involved working with expert track engineers and civil materials scientists to develop and apply a standardised segmentation protocol for defect boundaries and classification labels (e.g., hairline crack vs. fatigue crack vs. spalling). While the annotation and cleaning were not experimental, they were directly related to and necessary for the core activities involving model training and testing. 

Supporting Activity 2: Development of internal testing environment and MLOps infrastructure 

The team designed a secure, high-performance, cloud-based environment to isolate and run large-scale experimental models, specifically to manage the complex training jobs required for deep learning on multi-gigapixel track images. This infrastructure included automated tools for GPU resource allocation, hyperparameter tuning, model version control (MLflow), and supporting the traceability of the image data and associated environmental conditions (e.g., ambient temperature, humidity) used during testing. This setup was developed specifically to support the core R&D experiments. 

Records Maintained

  • Weekly technical logs recorded in Airtable by engineering staff, detailing time spent on core and supporting R&D tasks, annotated with experiment IDs and activity categories 
  • Experiment tracking and version control managed using Weights & Biases, capturing model architecture changes, training metrics (e.g. IoU, precision, recall), and data augmentation configurations across experimental runs 
  • Image dataset annotations and revisions stored in CVAT, with audit trails of contributor input, label versions, and segmentation schema updates 
  • Cloud training environment logs automatically captured through AWS, including hardware utilisation, runtime parameters, and model artefacts tied to specific Git commit hashes 
  • Design rationales, test plans, and post-experiment evaluations documented in Confluence, including links to experiment dashboards, internal peer reviews, and data validation outcomes 
  • Financial records and contractor invoices managed through MYOB, tagged against R&D cost centres aligned with eligible experimental activity streams 

Why Specialist Expertise Matters When Claiming R&D 

For companies within the software and AI industry, the R&D Tax Incentive offers a powerful way to recoup development costs and accelerate innovation. However, with regulatory requirements for the program frequently changing and scrutiny of AI claims increasing, demonstrating genuine technical uncertainty and eligibility in this field is more challenging than ever.  

Specialist guidance from trusted advisors such as BlueRock can make a critical difference at every stage of your R&D journey, helping to navigate the complex and shifting rules.

What You Gain From Engaging a Trusted Advisor

Advisors have a deep technical understanding of the nuances of the program, and know how to frame your innovation in line with the legislation. By engaging a specialised advisor, you can have confidence that your AI or software projects meet the R&D Tax Incentive’s strict technical criteria.  

As expert advisors we can help you interpret evolving regulations, structure your documentation, and reduce the risk of errors, audits, or rejected claims, so you can move forward with certainty. 

With in-depth knowledge of the rules specific to software and AI, we can identify all eligible activities and costs, ensuring you claim the maximum benefit you're entitled to without overstepping compliance boundaries.  

This means you won’t leave valuable incentives on the table or face penalties from misinterpreting the fine print. 

Navigating the R&D claim process is complex and time-consuming, especially for technical teams used to building products, not writing detailed applications.  

By handing over the claim preparation, you free up your team to concentrate on innovation and commercial impact, instead of getting bogged down in forms and deadlines. 

When to Engage an Advisor


Early scoping

Before commencing your project or committing significant resources, to determine which planned activities are genuinely eligible. 


During project delivery

To ensure ongoing documentation aligns with regulatory requirements and to adapt processes in response to changing business needs. 


At registration and claim preparation

When compiling technical information and financial data for submission, and before lodging your Company Tax Return. 

The BlueRock Advantage


An eye for detail 

We take a thorough, hands-on approach to preparing R&D claims, spending time on the small details so your team doesn’t have to. Our process is designed to minimise disruption while capturing the information needed to support a well-documented and defensible claim.


Relatable, knowledgeable team 

Our Grants and Incentives team is made up of specialists with deep experience across science, technology, and commercial fields. This allows us to translate your technical concepts into clear, compelling applications that speak directly to assessors. 


A holistic approach 

As a multidisciplinary firm, BlueRock unites specialists in R&D, accounting, legal, digital, and commercial fields, all under one roof. This integrated approach ensures every aspect of your claim is expertly managed, from project scoping through to tax strategy and compliance. Whatever your question or concern, we have the right expert on hand to support you.


Relationships come first

We prioritise open, supportive partnerships with our clients, working closely with founders and technical teams. Your goals are our goals, and we go the extra mile to help you succeed. 

With BlueRock as your advisor, you gain a trusted partner committed to transparency, technical depth, and outstanding client outcomes, so you can innovate boldly, knowing your R&D investment is protected and supported at every stage.

Let's Fetch You Some Funding



Get in touch

You’ll learn how these activities interact with the R&DTI framework, what qualifies, what does not, and how to frame your work to reflect the genuine R&D it may contain.

Readers should not rely on this guide as formal advice. It is intended for general information only and is not a substitute for financial, legal or tax advisory services. Advice you rely on to make important commercial, legal and tax decisions should always be tailored to your specific circumstances. Companies should review the relevant legislation, including Division 355 of the Income Tax Assessment Act 1997, and seek professional guidance to ensure their R&D claims are compliant.

Why the R&D Tax Incentive Matters

To support this kind of innovation, the Research and Development Tax Incentive (R&DTI) offers a tax offset to eligible Australian companies that conduct experimental R&D activities. The incentive is designed to reduce the cost of developing new or improved products, processes, devices, or services, including those involving AI. It does this by offering a tax offset, and for some companies in tax losses, a tax refund.

Companies can recoup between 33.5% and 48.5% of their eligible R&D expenditure through this program. The exact rate, and whether any of it is refundable in cash, depends on the company's aggregated turnover and tax position.

The R&DTI encourages companies to invest in activities that involve technical uncertainty, require systematic experimentation, and have the purpose of generating new knowledge—exactly the type of work undertaken in much of today’s AI development.

So, is AI development eligible?

That’s the key question, and the short answer is: it depends.

The R&DTI framework is built on core principles that every claimant must understand:

  • core and supporting R&D activities
  • unknown outcomes and technical uncertainty
  • systematic progression of work
  • hypotheses, experiment, evaluation and conclusions

Many AI projects can meet these criteria, particularly those involving novel model development, advanced tuning, integration in unpredictable environments, or unexplored use cases. However, others that focus purely on deploying existing models or routine automation may not.

We use the term “AI” broadly to cover a range of subfields and methods, including:

  • machine learning (ML)
  • deep learning
  • natural language processing (NLP)
  • large language models (LLMs)
  • computer vision
  • reinforcement learning
  • generative and agentic systems, and
  • related intelligent or adaptive systems

We aim to help you understand how these activities interact with the R&DTI framework—what qualifies, what doesn’t, and how to frame your work to reflect the genuine R&D it may contain.

Common Pitfalls in AI R&D Claims

As a self-assessment program, the R&D Tax Incentive places responsibility on businesses to interpret the eligibility criteria and correctly assess their own AI activities. This creates the potential for misinterpretation and error.

This guide highlights the most common pitfalls encountered in AI-related claims, so you can make informed decisions and avoid costly mistakes.

Common Errors in AI Claims

  • Failing to maintain contemporaneous documentation that supports the AI experimentation and associated expenditure
  • Seeking to claim the development of AI-powered automation, dashboards or chatbots built for internal business processes (such as HR, finance or compliance)
  • Failing to demonstrate that you developed a new AI tool, rather than simply showing that you used AI tools in the way they were intended to be used. This can include adapting and integrating off-the-shelf AI/ML solutions to suit your workflow without significant experimental development, or performing basic prompt engineering or fine-tuning of pre-built LLMs
  • Claiming basic model retraining where expected outcomes are already well understood
  • Confusing routine development and incremental improvements with innovation
  • Claiming AI and software development work performed overseas by offshore or outsourced teams
  • Claiming R&D in one entity, while recording ownership of the resulting AI models, algorithms, or data assets in a different group company or IP-holding entity

Unlikely to be Core R&D Activities

  • bug fixing 
  • UI changes 
  • user acceptance testing (UAT) 
  • data cleansing, mapping and migration 
  • routine feature engineering  
  • ongoing system maintenance   

Practical Guide

Understanding the R&D Tax Incentive Program

The R&DTI is Australia’s core mechanism for encouraging business investment in experimental R&D. It is administered jointly by the Department of Industry, Science and Resources (DISR) and the Australian Taxation Office (ATO).

The program offers a tax offset for eligible R&D expenditure incurred by R&D entities. The incentive is intended to reduce the financial burden of genuine innovation, supporting activities that are uncertain, experimental and aimed at generating new knowledge, not merely routine improvements or technical maintenance.

Companies with aggregated turnover under $20 million may receive a refundable tax offset of 43.5% of eligible expenditure (or 48.5% in some circumstances).

Companies with aggregated turnover of $20 million or more are eligible for a non-refundable offset, which will fall between 33.5% and 46.5%, depending on the proportion of R&D spend to total expenditure. This measure is known as R&D intensity.

These rates mean that for each dollar spent on eligible R&D, a company can reduce its tax burden. Loss-making small companies can also receive a cash refund, which makes this program hugely popular among start-ups and scale-ups.

To qualify, R&D activities must meet key criteria:

  • They must involve a core R&D activity; that is, one where the outcome is not known or readily determinable in advance based on existing knowledge or experience
  • The work must involve experiments, with a systematic progression from hypothesis through to testing, observation and evaluation
  • The purpose must be to generate new knowledge (either theoretical or applied) that will ultimately contribute to improved or novel products, processes, services or materials

Supporting R&D activities, which are those undertaken for the purpose of supporting the core activity, may also be eligible.

Eligible R&D Expenses

Eligible R&D expenditure includes costs that are directly related to conducting your core and supporting R&D activities. Usually these will be:

  • Salary and wages for employees directly engaged in R&D, including software engineers and data scientists
  • Payments to contractors where the work relates to R&D and is conducted on behalf of the company
  • “Other” expenses, which can include rent, utilities, cloud hosting, and software licences
  • Depreciation on assets such as computer and office equipment
  • Payments to registered research service providers, including universities or CSIRO
  • Feedstock: consumables used up or transformed in R&D, which is less common in AI

Eligible AI Businesses

To access the R&D Tax Incentive, a business must first meet the basic eligibility requirements. Only Australian resident companies incorporated under Australian law (or foreign companies with a permanent establishment in Australia) can claim.

Sole traders, partnerships, and trusts (except public trading trusts with a corporate trustee) are not eligible.

Beyond entity type, the business must be conducting eligible R&D activities. For AI businesses, eligible activities may include developing novel model architectures, adapting existing algorithms to non-trivial new domains, designing new feature extraction or data labelling techniques where standard approaches fail, optimising model performance under constrained or variable conditions, or investigating unexpected behaviour in model generalisation.

Routine use of off-the-shelf models or tools, deployment of existing APIs, or software customisation for internal use are unlikely to qualify unless they form part of a broader experimental activity with technical uncertainty.

Have you come up with a brilliant and elaborate prompt for a GPT model? Unfortunately, that alone is unlikely to qualify. Using a pre-trained model in the way it was designed to be used typically does not involve any scientific or technical uncertainty, nor does it require systematic experimental evaluation, no matter how sophisticated the prompt. Without modifying, adapting, or assessing the model in a way that addresses a technical unknown, it's unlikely to meet the R&D criteria.

Eligible projects can span various fields, as long as the R&D activities meet the legislative definition. We are seeing successful claims in agriculture, logistics, finance, insurance, marketing, health and education, among other fields.

Core R&D Activities

Core activities in AI typically involve experimental work undertaken to resolve a specific technical uncertainty that cannot be resolved with existing knowledge or standard practice. The work must follow a systematic approach, forming hypotheses, conducting tests, collecting observations, and drawing conclusions.

Examples of core AI R&D activities might include:

  • Testing whether a novel model architecture (e.g. transformer variant, diffusion model, graph neural network) can achieve targeted performance on a previously unmodelled or highly variable dataset 
  • Investigating unexplained model behaviours, such as bias amplification, performance degradation, or prediction instability, where known methods (e.g. fine-tuning, calibration, ensembling) do not provide a reliable fix 
  • Designing a custom embedding, data representation or attention mechanism to integrate and reason over multi-source or multi-modal inputs (e.g. combining text, sensor data, and time-series signals) 
  • Evaluating reinforcement learning or other agent-based techniques in environments where input states are partially observable, highly dynamic, or sparsely labelled 
  • Experimenting with privacy-preserving techniques, such as synthetic data generation, differential privacy, or secure model deployment, to determine whether performance can be maintained without exposing sensitive data 

What qualifies these as core R&D is not just the complexity of the task, but the presence of technical uncertainty, where an experienced and qualified AI professional could not predict whether or how a desired outcome could be achieved. We also look at the genuine need for structured, hypothesis-driven experimentation to resolve that uncertainty.

Supporting R&D Activities

Supporting activities are those that are directly related to a core R&D activity. In some cases, such as where the activity produces goods or services, or involves excluded software development, they must also be conducted for the dominant purpose of supporting the core R&D.

Examples of supporting activities in AI might include:

  • Pre-processing data or synthesising training datasets used in an experiment
  • Background research on publications, industry best practices, or learning to use new tools
  • Building test infrastructure or tuning model parameters prior to core testing
  • Running baseline models to establish control conditions

These tasks are not experimental in themselves, but they are necessary to conduct the core R&D. Importantly, they would not have been undertaken at all or in the same way if not for the core activity.

Correctly identifying and separating core and supporting activities is essential for both eligibility and compliance. Projects that lack a clearly defined core activity, or where all work is routine, are unlikely to qualify under the program.

Eligible AI Development Activities

AI development activities that may qualify for the R&D Tax Incentive are those that address genuine technical uncertainty, where results cannot be known in advance, and established techniques do not offer clear solutions.

These eligible activities typically involve:

  • Tackling problems where the outcomes are not predictable using current methods or industry best practices
  • Designing and training models where the relationship between variables and results is non-deterministic, requiring systematic testing and analysis
  • Developing, experimenting with, or applying novel algorithms, model architectures, or custom feature engineering techniques to new or complex challenges
  • Investigating how different data types, algorithms, and operating environments behave and interact, especially in scenarios with unknown or unpredictable results

Examples of eligible activities may include: 

  • Developing a brand new algorithm to spot rare events in medical images 
  • Creating a custom neural network to make sense of data from many different types of sensors 
  • Designing new ways to clean and connect messy, incomplete data 
  • Testing how your model works in totally new situations, where you can’t predict if it will succeed 

Understanding Ineligible Activities for the R&D Tax Incentive

While the R&D Tax Incentive is designed to support genuine innovation, not every AI project will qualify. The program specifically excludes certain types of work, particularly where the activities are routine, administrative, or do not involve significant technical uncertainty. Understanding these boundaries early helps ensure your claim is focused, compliant, and unlikely to face issues under review.


Internal Use Software

Software developed mainly for a business’s own internal administration, or for use by an affiliate, is an excluded activity under the R&D Tax Incentive. This includes AI-powered tools designed solely to support functions like payroll, HR, accounting, compliance reporting, and internal workflow management. 

Examples of excluded activities include:  

  • Deploying a chatbot to handle HR queries 
  • Designing an AI agent bot to onboard employees 
  • Using an algorithm to automate invoice processing 
  • Implementing AI-driven dashboards for monitoring performance metrics, KPIs or workflow status 

Digital Process Automation

Digital process automation refers to using software (including AI) to streamline, script, or automate steps in existing business processes. While these solutions are useful, and may apply AI in ways that are new to the business, they are generally implemented using established techniques and tools with predictable outcomes, which excludes them from eligibility under the R&DTI program.  

Examples of digital process automation activities likely to be excluded include:  

  • integrating existing off-the-shelf SaaS tools using APIs 
  • using established LLM products for their intended purpose (i.e. prompt engineering) 
  • using low-code/no-code platforms  

Other Exclusions 

In addition to routine software development and automation, the following activities are not eligible as core activities under the R&D Tax Incentive: 

  • market research, market testing, market development, or sales promotion. This includes consumer surveys 
  • commercial, legal and administrative aspects of patenting, licensing or other activities  
  • activities associated with complying with statutory requirements  
  • research in social sciences, arts or humanities 

While these exclusions are less common in AI development, it is important to be aware of them for projects that may intersect with these areas. 

Still Not Sure if Your Activities Are Eligible? 

R&D tax legislation can be complex and the boundary between eligible and ineligible activities is not always obvious. If you are uncertain about your project’s eligibility, consulting a specialist advisor experienced in the software and AI sector (such as BlueRock) can help you clarify your position.  

Documentation and Record Keeping Requirements 

For the purposes of the R&D Tax Incentive, maintaining evidence and good documentation is not an afterthought or mere best practice. Record-keeping is a mandatory requirement that directly determines your eligibility. If you can’t produce the right records, your claim may be denied or reduced. 

All records must be contemporaneous, meaning they must be created as the work and spending happen, not produced later in response to a review or audit. 

If you’re new to the program, this might sound daunting, but it doesn’t have to be. With the right processes and support, effective record-keeping can be straightforward and built into your workflow from day one. 

Evidence for Core R&D Activities

For core activities, you must retain evidence that (1) a genuine technical uncertainty existed; and (2) the experimental work was undertaken to investigate this uncertainty using a hypothesis-driven approach.  

This documentation should demonstrate how your experiments were structured, executed, and evaluated. 

Examples of suitable records: 

  • Project briefs or technical design documents: Confluence project pages outlining the technical problem 
  • Research literature reviews: saved PDFs or annotated summaries referencing peer-reviewed papers or arXiv preprints showing the knowledge gap 
  • Experiment plans and protocols: Jupyter Notebooks or markdown files in your codebase defining model parameters, data splits, and hypotheses 
  • Source code repositories and version control logs: GitHub or GitLab commit histories, with branches and pull requests reflecting variations of model development and experimental runs 
  • Model training logs or output files: TensorBoard visualisations, MLflow tracking runs and CSV exports showing metrics such as F1 score, accuracy, precision, recall and confusion matrices for each experiment cycle 
  • Analysis reports: PowerPoint presentations, Google Slides, or internal Notion pages summarising outcomes, failures, and the evolution of the investigation 
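Where evaluation results are exported by hand rather than through a tracker such as MLflow or TensorBoard, the headline metrics are simple to derive from a confusion matrix. A minimal sketch, using made-up counts rather than results from any real experiment:

```python
# Hypothetical confusion-matrix counts for one experiment cycle; real values
# would come from an evaluation run (e.g. an MLflow or TensorBoard export).
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall and F1 from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

# Illustrative counts only: 40 true positives, 10 false positives,
# 10 false negatives, 40 true negatives.
metrics = classification_metrics(tp=40, fp=10, fn=10, tn=40)
```

Saving one such row per experiment cycle (for example as a dated CSV in the repository) is enough to show how each run was evaluated.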

Evidence for Supporting Activities

Supporting activities require documentation showing how each task directly relates to, and enables, the core R&D activity, as well as confirmation that the work was actually performed. 

Examples of suitable records: 

  • Task management records: Jira tickets, Asana tasks, or Trello cards linking activities such as data cleaning or feature engineering to specific core R&D experiments 
  • Meeting notes or planning documents: notes in Notion, Confluence or Google Docs recording discussions about infrastructure setup 
  • Chat histories: Slack or Microsoft Teams messages, or email chains, reflecting technical collaboration and decisions 
  • Maintenance logs: records from system dashboards, DevOps logs or custom checklists that track updates and fixes in response to R&D testing requirements 

Evidence for R&D expenditure

For each expense claimed, you must be able to substantiate that it was incurred and that it has a direct connection to an R&D activity. 

Examples of suitable records: 

  • Payroll and timesheet records: entries in systems such as Employment Hero, Deputy, Excel, Harvest, or Toggl, mapping staff hours directly to R&D sprints or experiment cycles 
  • Invoices: dated and itemised documents specifying the amount payable, payment terms, and a clear description of the work performed, directly linked to experimental milestones or R&D activities 
  • Contractor agreements: signed and dated agreements that clearly outline the nature, scope, and relevance of contract work to core or supporting R&D activities 
  • Bank statements: official bank feeds, statements, or remittance slips showing actual payments made for eligible R&D expenses 
  • Calculations and logbooks: spreadsheets, project diaries, allocation workbooks, or digital logbooks that support how shared costs (such as cloud services or staff time) are reasonably apportioned to R&D activities
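For shared costs, the apportionment itself can be as simple as a usage-based ratio, provided the underlying usage records exist to support it. A minimal sketch with invented figures (the $24,000 bill and the hour counts are illustrative, not drawn from any actual claim):

```python
# Illustrative usage-based apportionment of a shared cloud bill.
# All figures are made up; a real allocation workbook would cite the
# logged usage records that justify the ratio.
def rd_share(total_cost: float, rd_usage: float, total_usage: float) -> float:
    """Portion of a shared cost reasonably attributable to R&D activities."""
    return round(total_cost * rd_usage / total_usage, 2)

# e.g. 600 of 800 logged GPU-hours ran R&D experiments on a $24,000 bill
cloud_rd = rd_share(24_000, rd_usage=600, total_usage=800)  # -> 18000.0
```

The same pattern works for office rent or staff time, with headcount or timesheet hours as the usage measure.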

Case Study 1: AI-Powered SaaS Tool for Hospital Operations


This case study is a fictional amalgamation of the work of real BlueRock clients.

Business Background and Objectives 

ClarityB1 is a Melbourne-based SaaS company developing an AI-powered operational intelligence platform designed specifically for hospitals. Their product aims to unify complex, disparate datasets spanning patient admissions, staff rosters, clinical activity, facilities management, and procurement into a secure environment accessible via a natural language interface. Their business model is licensing the product to hospitals as a management tool. 

Through this interface, staff ranging from administrators to clinicians and executive teams can ask context-specific operational questions, such as: 

“Which wards are most likely to require agency staff in the next fortnight?” 

“What was the average turnaround time on critical maintenance requests during Q1?” 

“How many nursing shifts were missed or rescheduled over Christmas last year?” 

“Where are the biggest variances between forecast and actual bed usage in the past three weeks?” 

The company’s R&D objectives have focused on: 

  • Developing a reliable and explainable AI system capable of interpreting ambiguous queries across different user types 
  • Safely modelling operational scenarios without compromising patient privacy 
  • Testing whether their model could predict staffing load variances during periods of abnormal demand (e.g. December–January holiday season) 
  • Designing a data ingestion pipeline that could operate securely across multiple hospital systems, each with different formats, levels of completeness and governance rules 

Core R&D Activities

Core Activity 1: Development and testing of a generalisable query engine across variable clinical datasets 

The team designed a novel approach to parse natural language queries from hospital staff and resolve them into structured representations across heterogeneous datasets. The experiment sought to determine whether a custom intent-classification and entity-mapping model could perform consistently across both structured (e.g. rosters, schedules) and semi-structured data (e.g. free-text reports, maintenance logs), in an environment with variable schema definitions and inconsistent terminology.

Core Activity 2: Experimental development of a privacy-preserving data modelling framework 

ClarityB1 tested whether it could forecast resource loads, particularly nursing staff requirements, without accessing or exposing personally identifiable information. This involved developing and evaluating new methods for injecting synthetic or obfuscated data into training workflows while maintaining model fidelity. The outcome of this activity could not be determined in advance due to the unknown effects of these techniques on prediction accuracy and model drift. 


Supporting R&D Activities

Supporting Activity 1: Integration and standardisation of clinical and operational data sources

Before experiments could be run, the team built tools to ingest, clean and transform disparate datasets from partner hospitals. While this work was not experimental in itself, it was directly related to and necessary for the core activities involving model development and testing.

Supporting Activity 2: Development of internal testing environment and MLOps infrastructure

The team designed a secure, cloud-based environment to isolate and run experimental models. This environment allowed for version control, rollback, and auditing of data access, and supported traceability during testing. This infrastructure was developed specifically to support core R&D experiments.

R&D Expenditure 

ClarityB1 claimed the following eligible R&D costs: 

Salaries

  • Head of Engineering: 1,000 hours (55% of a $200,000 package)
  • Data Scientist: 1,635 hours (90% of a $175,000 package)
  • Software Developers (3): 2,180 hours total (average of 40% of their $160,000 salaries)

Contractor

Fee of $40,000 paid to an external security and cloud computing specialist to advise on secure model deployment and validate the synthetic data environment used in privacy-preserving R&D experiments.

Other Expenditure

  • Rent: 25% of annual office rent attributed to R&D employees 
  • Hosting and cloud services: $15,000 for the secure R&D environment used for model development and testing 

The total R&D expenditure was $529,500 and, as the company was in a tax loss position, it claimed the full refundable offset, receiving a cash refund of $230,333. 
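The figures above can be reconciled as follows. The rent dollar amount is not stated in the case study, so it is inferred from the total; the 43.5% rate assumes the refundable offset available to a company with aggregated turnover under $20 million (the 25% base rate entity tax rate plus the 18.5% premium):

```python
# Reconstructing ClarityB1's claim. Integer arithmetic keeps subtotals exact.
salaries = (200_000 * 55 + 175_000 * 90 + 3 * 160_000 * 40) // 100  # 459,500
contractor = 40_000
hosting = 15_000
# Rent is stated only as "25% of annual office rent"; the dollar figure
# below is inferred from the $529,500 total, not given in the case study.
rent = 529_500 - (salaries + contractor + hosting)  # implied: 15,000

total = salaries + contractor + hosting + rent       # 529,500
refund = total * 43.5 / 100  # 230,332.50, i.e. the ~$230,333 refund claimed
```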

Records Maintained

ClarityB1 kept a set of contemporaneous (i.e. completed in real time) records, including:

  • Time records completed in Google Sheets every Friday by technical staff to track R&D effort
  • Task management records maintained in JIRA, with R&D activities tagged and linked to hypotheses
  • Source code repositories managed in GitHub, with version histories, experimental branches, and commit logs
  • Model training logs and evaluation metrics captured using MLflow, documenting input parameters, outputs, and performance results
  • Meeting notes and technical design documentation recorded in Notion, capturing design decisions, testing plans, and hypothesis discussions
  • Contracts, invoices, and pay runs managed through Xero

How BlueRock Helped


ClarityB1 engaged BlueRock’s Grants & Incentives team to help ensure their R&D claim was fully compliant and robust. The BlueRock team: 

  • Established the company’s eligibility for the program 
  • Advised the leadership team on the scope of the R&D Tax Incentive and its application to AI software 
  • Reviewed existing records and guided improvements to documentation practices 
  • Identified and validated the company’s core and supporting R&D activities 
  • Prepared a detailed and technically sound application to AusIndustry 
  • Calculated eligible expenditure, ensuring correct apportionment and treatment of costs 
  • Issued a tax advice letter for the company’s directors, outlining the basis for the claim 
  • Prepared the R&D schedule for the company’s tax return 

Case Study 2: Development of an ML Model for Concrete Defect Detection in Motorsport Racetracks 


This case study is a fictional amalgamation of the work of real BlueRock clients.

Business Background and Objectives

TrackScan Technologies is a Sydney-based company developing an automated inspection platform for racetrack maintenance and safety teams. Their core product is an ML model designed to analyse high-resolution images captured by specialised drone or vehicle-mounted cameras and provide an automated severity and classification score for concrete defects such as cracks, spalling, and alkali-silica reaction (ASR) damage. 

The platform aims to assist track engineers in triaging maintenance needs and improving the consistency and speed of track surface assessment. Key functionalities include: 

  • Rapid assessment of defect type and severity from uploaded high-resolution track surface images 
  • Highlighting image regions that contain defects to guide ground crews 
  • Tracking the growth and change of specific defect areas over time from sequential inspection runs 

The company's R&D objectives have focused on: 

  • Developing a reliable and robust convolutional neural network (CNN) capable of accurately classifying different defect types in diverse, real-world images (e.g., wet, dry, high-texture, low-texture surfaces) 
  • Addressing the challenge of data imbalance, where minor hairline cracks far outnumber severe structural defects (like deep spalling) in inspection datasets 
  • Testing whether their model could maintain high performance across images captured at different speeds, angles, and varying light/weather conditions (domain generalisation) 
  • Designing an efficient, real-time data pipeline to process the massive volumes of high-resolution image data generated during a typical track inspection run 

Core R&D Activities

Core Activity 1: Experimental development of a novel CNN architecture for robust defect feature extraction 

The team designed and tested a custom deep learning model architecture to improve the extraction of subtle, diagnostically relevant features (e.g., crack width, spalling depth, ASR 'map cracking' patterns) from the complex, textured racetrack images compared to standard architectures (e.g., VGG, EfficientNet). The experiment sought to determine if this new architecture could achieve a statistically significantly higher Intersection over Union (IoU) score for pixel-level defect segmentation than baseline models when applied to an independent test set comprising images from various international circuits and different inspection hardware.
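For context, the IoU metric referenced here measures the overlap between predicted and ground-truth defect pixels. A minimal sketch on toy masks (the eight-pixel masks below are invented for illustration, not from any real segmentation output):

```python
def pixel_iou(pred: list, truth: list) -> float:
    """Intersection over Union for flat binary (0/1) segmentation masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Toy masks: two pixels overlap out of four flagged in total -> IoU of 0.5
pixel_iou([1, 1, 1, 0, 0, 0, 0, 0], [0, 1, 1, 1, 0, 0, 0, 0])  # -> 0.5
```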

Core Activity 2: Development and testing of a data augmentation and transfer learning strategy to mitigate data imbalance 

TrackScan Technologies tested various transfer learning and advanced data augmentation techniques (such as geometric distortions mimicking camera shake, texture and colour variation simulating water and oil stains) to determine the optimal strategy for improving model training efficiency and prediction accuracy for the rare but critical severe defect classes. The uncertainty lay in identifying which combination of techniques would effectively synthesise new data points and rebalance the dataset without introducing artefacts that lead to spurious correlations or misclassification of non-defect track features such as painted lines. 
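To make the augmentation kinds named above concrete, here is a heavily simplified sketch; this is not TrackScan's actual pipeline (a production system would use a library such as Albumentations or torchvision), and the pixel values are invented. Images are represented as lists of greyscale pixel rows:

```python
import random

def horizontal_flip(image):
    """Geometric distortion: mirror each row, as if the pass direction reversed."""
    return [row[::-1] for row in image]

def brightness_jitter(image, rng, spread=30):
    """Texture/colour variation: shift intensities, e.g. wet versus dry surface."""
    delta = rng.randint(-spread, spread)
    return [[min(255, max(0, p + delta)) for p in row] for row in image]

# Apply both transforms to a tiny 2x2 "image" with a seeded RNG for repeatability
rng = random.Random(42)
augmented = brightness_jitter(horizontal_flip([[10, 200], [120, 30]]), rng)
```

The experimental question in the case study is precisely which combinations of such transforms help the rare defect classes without creating spurious patterns.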


Supporting R&D Activities

Supporting Activity 1: Curation, standardisation, and annotation of track surface image datasets

Before experiments could be run, the team built tools to ingest, clean, and normalise the ultra-high-resolution images from partner racetracks, ensuring consistency in resolution, file format, and GPS metadata. This involved working with expert track engineers and civil materials scientists to develop and apply a standardised segmentation protocol for defect boundaries and classification labels (e.g., hairline crack vs. fatigue crack vs. spalling). While the annotation and cleaning were not experimental, they were directly related to and necessary for the core activities involving model training and testing. 

Supporting Activity 2: Development of internal testing environment and MLOps infrastructure 

The team designed a secure, high-performance, cloud-based environment to isolate and run large-scale experimental models, specifically to manage the complex training jobs required for deep learning on multi-gigapixel track images. This infrastructure included automated tools for GPU resource allocation, hyperparameter tuning, model version control (MLflow), and support for traceability of the image data and associated environmental conditions (e.g., ambient temperature, humidity) used during testing. This setup was developed specifically to support the core R&D experiments. 

Records Maintained

  • Weekly technical logs recorded in Airtable by engineering staff, detailing time spent on core and supporting R&D tasks, annotated with experiment IDs and activity categories 
  • Experiment tracking and version control managed using Weights & Biases, capturing model architecture changes, training metrics (e.g. IoU, precision, recall), and data augmentation configurations across experimental runs 
  • Image dataset annotations and revisions stored in CVAT, with audit trails of contributor input, label versions, and segmentation schema updates 
  • Cloud training environment logs automatically captured through AWS, including hardware utilisation, runtime parameters, and model artefacts tied to specific Git commit hashes 
  • Design rationales, test plans, and post-experiment evaluations documented in Confluence, including links to experiment dashboards, internal peer reviews, and data validation outcomes 
  • Financial records and contractor invoices managed through MYOB, tagged against R&D cost centres aligned with eligible experimental activity streams 

Why Specialist Expertise Matters When Claiming R&D 

For companies within the software and AI industry, the R&D Tax Incentive offers a powerful way to recoup development costs and accelerate innovation. However, with regulatory requirements for the program frequently changing and scrutiny of AI claims increasing, demonstrating genuine technical uncertainty and eligibility in this field is more challenging than ever.  

Specialist guidance from trusted advisors such as BlueRock can make a critical difference at every stage of your R&D journey, helping to navigate the complex and shifting rules.

What You Gain From Engaging a Trusted Advisor

Advisors have a deep technical understanding of the nuances of the program and know how to frame your innovation in line with the legislation. By engaging a specialised advisor, you can have confidence that your AI or software projects meet the R&D Tax Incentive’s strict technical criteria.  

As expert advisors, we can help you interpret evolving regulations, structure your documentation, and reduce the risk of errors, audits, or rejected claims, so you can move forward with certainty. 

With in-depth knowledge of the rules specific to software and AI, we can identify all eligible activities and costs, ensuring you claim the maximum benefits you're entitled to without overstepping compliance boundaries.  

This means you won’t leave valuable incentives on the table or face penalties from misinterpreting the fine print. 

Navigating the R&D claim process is complex and time-consuming, especially for technical teams used to building products, not writing detailed applications.  

By handing over the claim preparation, you free up your team to concentrate on innovation and commercial impact, instead of getting bogged down in forms and deadlines. 

When to Engage an Advisor


Early scoping

Before commencing your project or committing significant resources, to determine which planned activities are genuinely eligible. 

During project delivery

To ensure ongoing documentation aligns with regulatory requirements and to adapt processes in response to changing business needs. 

At registration and claim preparation

When compiling technical information and financial data for submission, and before lodging your Company Tax Return. 

The BlueRock Advantage

An eye for detail 

We take a thorough, hands-on approach to preparing R&D claims, spending time on the small details so your team doesn’t have to. Our process is designed to minimise disruption while capturing the information needed to support a well-documented and defensible claim.

Relatable, knowledgeable team 

Our Grants and Incentives team is made up of specialists with deep experience across science, technology, and commercial fields. This allows us to translate your technical concepts into clear, compelling applications that speak directly to assessors. 

A holistic approach 

As a multidisciplinary firm, BlueRock unites specialists in R&D, accounting, legal, digital, and commercial fields, all under one roof. This integrated approach ensures every aspect of your claim is expertly managed, from project scoping through to tax strategy and compliance. Whatever your question or concern, we have the right expert on hand to support you.

Relationships come first

We prioritise open, supportive partnerships with our clients, working closely with founders and technical teams. Your goals are our goals, and we go the extra mile to help you succeed. 

With BlueRock as your advisor, you gain a trusted partner committed to transparency, technical depth, and outstanding client outcomes, so you can innovate boldly, knowing your R&D investment is protected and supported at every stage.

Let's Fetch You Some Funding


get in touch