WANTED: A New Procurement Model for AI-powered Small Businesses
I’ve run a small software business for 12 years, and in that time I’ve been fortunate to have some huge (think billion-dollar) companies as customers.
There have been some real highs and lows over that time, and I’ve watched from the sidelines as these large organisations struggle to adapt to the increasing pace of technical change. In particular, I’ve learnt what it’s like to be on the receiving end of a procurement process that I wouldn’t wish on my worst enemy!
As I’ve been thinking recently about how AI will reshape business, I’ve started to realise that the relationship between vendors and their customers will need to change in a variety of ways. The two most important of these are the perceived risks around technology (data) and people (vendor size).
Technical — Where is my data?
We launched as an internet-first company, offering a cloud-hosted service from Day 1. In the early 2010s, potential customers didn’t really want to touch cloud services, believing the cloud to be inherently unsafe — most preferred on-premise solutions.
This picture changed suddenly around 2015–16: as cloud became more mainstream, the clamour to move to the cloud shot up the management agenda. The few enterprise installations we had were quickly converted to hosted implementations and everybody breathed a sigh of relief.
It took several years, but the pendulum seems to be swinging back the other way… but perhaps not in the way you might expect.
Customers still don’t want enterprise installations. The difference today is that their IT security teams have now woken up to the fact that they have systems on the cloud and that those systems potentially contain “sensitive data”, and have sprung into action to try to provide some oversight and risk management.
You can’t argue with the goal here; of course all firms must be more cyber-aware and should have a clear understanding of where and how their data is processed. The way this is achieved, however, is painful for both parties. The software provider is given lengthy, vaguely-worded cybersecurity questionnaires, which attempt to measure their security against an ever-shifting set of goalposts.
As a software vendor I only ever receive such questionnaires, so I can’t comment on whether my customers really do find them an effective risk management tool… but as a vendor, I honestly find them very counterproductive:
- I’ve never seen a questionnaire calibrated for the confidentiality of data handled — the questions assume that everything is ultra-sensitive.
- I’ve never seen a questionnaire that ISN’T full of unexplained acronyms or defined terms (anything with Title Case).
- I’ve never seen a questionnaire suitable for dev teams of fewer than 100 people (given the number of checks, balances and escalations they expect).
- I’ve never seen a questionnaire that actually explains what the requirement is.
- I’ve never seen two questionnaires ask the same question in the same way.
- I’ve never been given a contact to help explain or clarify any questionnaire.
- I’ve never had any constructive feedback on the answers given.
- I’ve never had a questionnaire that took LESS than 3 full days to complete.
Not only does the status quo frustrate vendors, I’d suggest it doesn’t lead to good outcomes either; the vendor is aiming to meet a set of undocumented requirements and doesn’t get good feedback on where to improve. And reminding customers of the option to move back to on-premise doesn’t work either; internal IT teams no longer want to manage third-party software.
The arrival of AI will — of course — simply pour gasoline on this fire.
If — as customers increasingly expect — a vendor incorporates AI into its product, and that AI is provided by a third-party LLM (the only realistically economic approach), then where does their sensitive data go? If the vendor passes it to the LLM, then how is it used? If it is used for training, can sensitive data be leaked to others?
Every prompt to that LLM suddenly becomes a leak risk
How can the vendor ever offer data security guarantees in their next cybersecurity questionnaires? What will be the (unarticulated) buyer requirements around this?
Blanket bans on AI are unlikely to work; this would lead directly to perceived product weakness and competitive disadvantage. And training a model in-house would be too complex, costly and time-consuming.
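To make that leak surface concrete, here is a minimal Python sketch of the kind of AI feature customers now ask for: summarising a customer document via a third-party LLM. The endpoint, model name, response shape and the `summarise_contract` helper are all hypothetical placeholders, not any particular provider's API; the point is simply that the customer's sensitive text leaves the vendor's infrastructure inside every single prompt.

```python
import os

import requests

# Hypothetical endpoint and model name for a third-party LLM provider
# (placeholders, not a real URL or product).
LLM_ENDPOINT = "https://api.example-llm-provider.com/v1/chat/completions"
LLM_MODEL = "example-model"


def summarise_contract(contract_text: str) -> str:
    """Illustrative vendor feature: summarise a customer's contract via an external LLM.

    The customer's (potentially sensitive) contract text is embedded verbatim
    in the prompt and sent outside the vendor's own infrastructure.
    """
    payload = {
        "model": LLM_MODEL,
        "messages": [
            {"role": "system", "content": "Summarise this contract in five bullet points."},
            {"role": "user", "content": contract_text},  # sensitive data leaves here
        ],
    }
    response = requests.post(
        LLM_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        timeout=30,
    )
    response.raise_for_status()
    # Response shape assumed to follow a typical chat-completions style API.
    return response.json()["choices"][0]["message"]["content"]
```

Whether that prompt is then logged, retained or used for training is governed entirely by the LLM provider's terms, and that is precisely what the vendor will be asked to vouch for in the next questionnaire.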
How procurement teams handle this is uncertain and may well depend on the functional use case.
One thing that would help, however, is to convert the (painful) cybersecurity process above into a genuine two-way partnership, where vendors and their customers develop a shared understanding of both the requirements and the solution.
- Buyers should more clearly classify their data and set requirements accordingly (a simple sketch of what this could look like follows this list).
- One-way questionnaires (with no feedback) should be replaced by solution presentations and a genuine two-way discussion about how data can be appropriately secured and handled.
- Customer and vendor should work in the spirit of a partnership to ensure cyber threats are identified and managed on an ongoing, evolving basis, as opposed to point-in-time report cards.
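As a minimal sketch of the first point, a buyer-side classification might look something like this (in Python purely for illustration; the tier names and controls are invented, not drawn from any real framework):

```python
# Illustrative only: a buyer-side mapping from data classification to the
# requirements a vendor must evidence. Tier names and controls are invented.
DATA_CLASSIFICATION_POLICY = {
    "public": {
        "third_party_llm_allowed": True,
        "requirements": ["standard contractual terms"],
    },
    "internal": {
        "third_party_llm_allowed": True,
        "requirements": ["no use of our data for model training"],
    },
    "confidential": {
        "third_party_llm_allowed": False,
        "requirements": [
            "encryption at rest and in transit",
            "named sub-processors only",
            "data stays in-region",
        ],
    },
}


def requirements_for(classification: str) -> list[str]:
    """Return the controls a vendor must evidence for a given data class."""
    return DATA_CLASSIFICATION_POLICY[classification]["requirements"]
```

With something like this in hand, a vendor can design and evidence controls against specific, documented tiers rather than guessing at unwritten requirements.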
Wouldn’t that be nicer and lead to better outcomes for everybody?
People — How big is your company?
One of the questions that always seems to crop up during a product demo is how many staff our firm has. The question has always struck me as strange.
Why headcount and not profit? Possibly because most firms (even large ones) are not consistently profitable! The irony, of course, is that excessive headcount is what causes costs to spiral and companies to make a loss in the first place. Seen that way, a large headcount is an indicator of imprudent business decisions rather than strength.
To illustrate this point, let’s consider three (admittedly extreme) answers:
- “it’s only me”,
- “we are growing by 10% and hope to get to 1,000 by year end”,
- “we are reducing headcount by 25% per year”
Firstly, I don’t think anybody would disagree that buying software from a one-person company (#1) implies a significant key-man risk, and so there is little point dwelling on this item.
The growth story (#2) is the one that buyers expect to hear. A rapidly-growing headcount is — they assume — an indicator of business success, and it gives them a feeling that they will be well-supported going forward.
The third one is not a response you hear, as it potentially raises red flags. In the mind of the buyer (procurement), it suggests that the vendor is unprofitable or potentially running out of cash, or a bad place to work, or is in the process of winding up… All sorts of negative stories are imagined to fit that narrative.
As I’ve started to allude to above, however, headcount is often the reason why a company makes a loss, so I really challenge the assumption that it is a point of strength. Indeed, if a company can demonstrate that it is actively reducing payroll costs while maintaining (or improving) its services, then surely #3 is a much better answer than #2?
Of course, AI has great potential as a productivity booster that should allow vendors to achieve exactly that — maintaining or improving their service whilst slowly (and in a controlled fashion!) lowering headcount over the medium term. Indeed, my recent article concludes that it is the technically-led small to medium-sized vendors that are most likely to adopt AI in this way.
Well-run, AI-augmented companies of the future will be smaller than equivalent companies of today.
That’s a good thing, but it does require our mental models to change: downsizing needs to move from being a cause for concern to being a source of pride.
Will the one-person company ever be OK? Well, there is definitely an argument that a hyper-automated company can run almost autonomously, with AI offering 24/7 customer support, for example, and AI agents completing tasks without human involvement. If that is true, then it suggests that successful companies can be very small indeed (and perhaps correspondingly extremely profitable!)… but the key-man risk of a single person remains an obstacle from a business continuity and succession planning perspective.
I predict that average firm size will drop sharply over the next 5 years, and service companies with more than a few hundred employees will become very rare indeed.
So there you have it — some thoughts and predictions on how IT and procurement risks may need to be upgraded for a world of AI.
What do you think? Are there other angles that you see changing? What are the biggest sales and procurement challenges in an AI-powered world? Let me know in the comments below.
I hope you found this article interesting. I’m trying to write 2–3 articles like this each month, but keeping motivation high can be tough if there’s no feedback. So if you like this, please consider clapping / sharing / commenting, and just let me know you are out there :-)