Over a year ago I asked the LinkedIn community for opinions on the best choice of CRM for a small business. This crowdsourcing helped inform a project I was leading for a small but fast-growing business that is a heavy user of Microsoft Office, in particular Outlook and Excel. The client had used Salesforce in the past, though not for sales, and was not keen on its complexity and cost. The need for Outlook integration narrowed the field of available CRMs, with MS Dynamics and various Outlook add-ins the obvious candidates. Following the review, I recommended we go with HubSpot because:
There is little point in my listing all the features, most of which are common to most modern CRMs, so I will highlight a few points, many of which have both advantages and disadvantages:
Both the "Track Email" and "Log to CRM" options from the Outlook add-in work well, and the users like this.
HubSpot will go off to the internet to try to fill in missing details (company details from domain names, LinkedIn lookups), and this is helpful. But sales people being sales people, the data will be minimal unless you create some regime (and incentives) for them to come back and tidy it up. Don't expect to get much insight, or to target your marketing from the contact data, unless you do this; but this is a universal issue.
Adding new properties (fields) is easy, with a good selection of field types. I have a slight concern that creating new properties is too easy: users may be tempted to do it without checking whether an existing field or piece of functionality meets their need, or they may create a structure which meets an immediate need but is not normalised and quickly becomes unmanageable. This will need controlling, but again it is a universal issue.
The "out of the box" reporting is minimal. The "Dashboard" looks nice at first glance, but most of it is "crippleware" unless you subscribe to the paid version. However, the filters on the browse pages for the major tables (Company, Contacts, Deals), all work well, including filtering different stages of your sales pipelines. You can save named filters, so there isn't so much need for traditional reporting. With the necessary rights, you can download data into Excel.
One can record only a single-level parent-child relationship between companies, i.e. a company can be a parent or a child, but not both. There is no data sync between parent and child companies, and no data rolls up to the parent company. This is sub-optimal, but it is also something that is very difficult to get right in any system without hideous complexity that baffles the ordinary user, and in this respect it sums up where HubSpot CRM sits: it does not have the functionality of Salesforce, but neither does it have the complexity and clutter. Most users will be up and running in minutes.
Initial data load is very easy, and companies and contacts can be updated via an upload if you want to go for a soft start. However, deals can be added but not updated by this route; if you want to do a bulk update of deals, you will need to use the API.
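For a flavour of what a bulk update involves, here is a minimal Python sketch of batching the updates into API-sized requests. The endpoint shown is HubSpot's current v3 batch route (which post-dates the original review), the 100-per-request cap is an assumption drawn from HubSpot's documented batch limits, and the property names are hypothetical:

```python
from typing import Dict, Iterator


def batch_payloads(updates: Dict[str, dict], batch_size: int = 100) -> Iterator[dict]:
    """Yield request bodies for a batch deal-update endpoint.

    `updates` maps deal IDs to the properties to change, e.g.
    {"901": {"dealstage": "closedwon"}}. Batch endpoints cap the
    number of inputs per request, hence the chunking.
    """
    ids = list(updates)
    for start in range(0, len(ids), batch_size):
        chunk = ids[start:start + batch_size]
        yield {"inputs": [{"id": i, "properties": updates[i]} for i in chunk]}


# Each payload would then be POSTed (with an auth header) to something like
#   https://api.hubapi.com/crm/v3/objects/deals/batch/update
# using any HTTP client; the call itself is omitted from this sketch.
```

The point is less the plumbing than the discipline: once updates go through the API, they can be scripted, repeated, and checked, rather than hand-keyed.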
The ease of adding fields, and the possibilities opened up by the API, make it possible to extend the use of HubSpot's CRM. Whilst there is always likely to be some tailoring to fit a particular business, this opens up the possibility of building out workflows beyond just sales. If we can extend the deal record with a few extra fields so that production can use it to record and control their work, then we leverage the existing data, and sales even get a view of the progress of jobs. If we can create the invoices in our accounting system from completed deals, then we reduce re-keying within the company and incentivise sales to get their data right upstream. There is a judgement call here in terms of how much can be moved into HubSpot, and once more it does not have the facilities of Salesforce, but there will be enough for many small businesses to integrate their business beyond just sales and marketing.
We have used the API to report directly into Excel. Whilst this saved the cost of the reporting module, we did it because there was some complexity in the reporting: customers of this particular business mostly pre-pay annually, and then draw down individual projects against the pre-payments. We track these as two different flavours of pipeline and then match it all up in Excel.
Having set off down this road, we now do much more reporting via the API to Excel, and by providing hyperlinks back to HubSpot, most of the sales users actually use the Excel reporting as an index to their work, while sales management use it at both a summary and a detail level.
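The matching logic itself is simple once the deals are out of the API. The following Python sketch nets each customer's pre-payments against their draw-downs; the field names 'company', 'pipeline' and 'amount', and the pipeline labels, are stand-ins for whatever properties your deals actually carry:

```python
from collections import defaultdict
from typing import Dict, List


def prepayment_balances(deals: List[dict]) -> Dict[str, float]:
    """Net each customer's pre-payments against their drawn-down projects.

    Each deal dict is assumed (for this sketch) to carry 'company',
    'pipeline' ('prepay' or 'drawdown' -- hypothetical labels) and
    'amount'. Returns the remaining credit per company.
    """
    balances: Dict[str, float] = defaultdict(float)
    for deal in deals:
        if deal["pipeline"] == "prepay":
            balances[deal["company"]] += deal["amount"]
        elif deal["pipeline"] == "drawdown":
            balances[deal["company"]] -= deal["amount"]
    return dict(balances)
```

In practice we do the equivalent join in Excel, but the shape of the calculation is the same: two pipelines, keyed on the customer, netted off.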
There have been implementation issues, but these are mostly associated with organisational behaviour and would apply to any CRM implementation. I will cover them in a follow-up blog.
Some technical confidence and discipline will be required to set up default field choices, possibly add new fields, load data, and create filters etc. This would not be beyond competent users with a feel for data, of the type you might find in a modern marketing department. Finding someone who has done it before would accelerate the process as long as they can see your particular needs.
Utilising the API needs professional IT skills. If you can combine this with someone with business understanding and vision, this really opens up the possibility of integrating the CRM into your business, both to leverage the data, and also to build your workflows into and on top of the CRM. This is more of an investment, but provides a path for a small business to grow without losing control.
HubSpot CRM and Sales are certainly working well for our client, and I am happy to recommend this path to other small and medium-sized enterprises.
I hope this blog is helpful and I welcome comments; and of course, if you are looking for more specific help with #Hubspot, do get in touch - no sales pitch, just genuine independent advice!
20 Jul 18.
We may be wrong to focus on our data being right, when we should be more concerned that it is the right data. There is a congruence here with efficiency and effectiveness - doing the thing right and doing the right thing. We can put lots of effort into honing our processes and interactions so that they are efficient, but this can be to no avail if we are not doing the right thing. All very obvious, you might think, but the reality is that many people have conceptual problems with that abstract stuff we call data: they will ask people like me to check whether their data is right, and miss the more important question of whether they hold the right data. I recently examined a database for an application that had been in use for some years. The quality of the data on most measures was excellent. The curious incident was what was not there, amongst which were email addresses and mobile phone numbers.
The data protection principles tell us that data should be right in both ways. The right data comes first in principle 3: "Personal data shall be adequate, relevant and not excessive...", or as my wife might specify it: "enough, but not too much". Principle 4 tells us the data should be right: "Personal data shall be accurate and, where necessary, kept up to date". In some ways we are struggling with terminology, as we don't have words that necessarily sum up our different dimensions of rightness with a clarity similar to efficient and effective, but I'm going to use pertinence to talk about the right data, and accuracy to talk about the data being right.
We can measure, quantify and report on both efficiency and data completeness and accuracy, and plot ways to fill any gaps in the existing record. The questions of effectiveness, and of whether we are recording the pertinent data, are much more open and difficult to quantify. Whilst there may be some things that are definitely wrong, it is difficult to be certain that we ever have it as good as it could be, but we can be pretty certain that the world will move on and invalidate both our accuracy and our pertinence. Focusing on pertinence more than accuracy, we move from the certainty of analysing data quality to business analysis, where we will need to take account of the dynamism of data and the business, and the differing frames of reference of different users.
Within our companies, what is considered pertinent is going to vary by department. A contact record with just an email address may be considered a valid lead, and if you measure your sales or marketing effort on new leads then you will probably have lots of these. In the business world we may be able to extract some information from the domain, but the marketer tasked with deriving meaningful insight from this data has a lot to do, and would prefer all fields to be populated with accurate data. An operations person might be uninterested in the age of your contact, but would like the address and postal code to be correct. They may also be interested in any references stored to other systems, so they can link the data to the corresponding data elsewhere. An accounts person might be interested in something as basic as whether this is a new or existing customer, and the high level of duplicates in many systems shows this is not as simple a question as it sounds. Of course, you can try to enforce some mandatory fields, but the side effect can simply be bad data, as users enter something to get past the mandatory field that they see as stopping them doing their job. Bad data is mostly worse than no data, as we may make assumptions based on it.
Even in well controlled systems we may have problems where we try to categorise data. Too few categories and we lose precision in our insight. Too many and we similarly lose precision as the users will pick either the first one or an overall bucket. Enabling users to add their own categories may lead to chaos, as once you are beyond 30 or 40 categories users will add their own rather than try to find the closest match. I've seen many thousands of job functions in a system. Our categorisation, like our overall data, needs to be pertinent.
The world is dynamic, and our data will decay over time, in both accuracy and pertinence. General accuracy will probably decay faster than pertinence, but the way we reach customers and they find us keeps developing. You will need to review pertinence regularly. As with general accuracy, do not be afraid to throw away the old stuff. Not only will this help you see the wood for the trees, it is a data protection requirement. You may need to learn to tolerate some ambiguity, particularly as regards completeness. This can be difficult for those of us who have grown up with fully populated data, and it makes our query writing and reporting rather more complex, but the reality of the modern world is much more one of starting with minimal data and then updating and enhancing it when and where you can. This can be quite difficult, but at least it will be effective if you have focused on the right data before striving to get the data right.
10 Jan 18.
As part of my recent audit work I have mused with colleagues as to how an organisation populated with intelligent, committed, conscientious and well-intentioned individuals can get itself into such a fine mess with its data and/or systems. Generally, things have improved as new methodologies have emerged and been honed. Nevertheless, there is still plenty of scope to go wrong, and I am going to contend that mostly this boils down to old-fashioned management issues rather than anything inherently technical. My particular worry is that managers are not taking responsibility.
I have been struck by how IT departments are now viewed as a cost and have been pared down. Where development work is still done in house, it may be done as sprints under an agile methodology. This is fine, and a good way to get the development done with the involvement of all parties, based on quick prototyping. The danger is that issues that come to light after a sprint may be left to fester, particularly if they seem small. Big bugs are easy to spot and should be caught in testing; if not, they will still get fixed quickly when found. It is the little bugs or missing functionality that are more insidious. They may just sit there, gradually corrupting your data. That incomplete lookup will cause wrong categorisation. The missing field will mean that another user field is doubled up with a comma - or was it a semicolon? Such problems may leave the users needing to run a little manual fix, or more likely to keep a log of some sort on a shared spreadsheet. The resolution could be a small fix, but it doesn't fit into a sprint anywhere. IT have moved on. The close link between system and developer, possibly intermediated by a business analyst, has gone. Nobody is responsible.
The problems run deeper into our organisation populated with intelligent, committed, conscientious, well-intentioned and thoroughly nice individuals. Being well educated and experienced, they are all managers - it will say so in their job titles. They may manage external relationships, or, if they have any direct reports, they are managing other managers. It is like an army with officers but no sergeants. Supervisors seem to have been managed away. Nobody is supervising. With them has gone the quality control of data at a record level - the data that the rest of the organisation runs on.
Eventually the organisation will realise its systems/data are not fit for purpose and will investigate. Surprisingly often the perceived fix will be to build or buy a new system. There is an optimistic belief that somehow new systems will cure organisational/management problems. Even where the old system was a bespoke internal development. I could go into the reasons why this may seem a good idea to management, but that may send the cynicism way off the scale for one short blog.
But don't worry. These are not issues you need to fret about; it is not your responsibility.
7 Aug 17.
Courtesy of Rudyard Kipling
I keep six honest serving men
They taught me all I knew
Their names are WHAT and WHY and WHEN
and HOW and WHERE and WHO.
27 Nov 14.
There was an excellent "Schumpeter" article in the 18th October edition of the Economist on how many of the predictions of the impact of the internet have been wrong. Not just slightly wrong, but completely out of phase with reality. Schumpeter looked at these three predictions:
It would be possible to write books on the failure of each of these three predictions, but it is the lack of disintermediation that I find particularly interesting. Having talked to various recruitment agencies recently in my search for contract IT work I can say that they seem to perform two functions. One is a simple pattern-matching exercise to match candidate expertise to job specification. They have lots of CVs to process, so this seems to be a strict exercise with no room for thought outside the box. It could be done by a machine. The second stage is to call and validate what the filtered set of candidates have said they can do. I have done some technical interviews. Typically they don't last long, with the candidate soon apologising for putting some acronym on their CV when actually they have only had the most fleeting exposure to it.
It seems then that intermediaries continue to thrive, and new forms of intermediation will grow, for two reasons. Firstly, the virtual world is just too big. Many people would not know where to start searching, and even if they did they may not have the time and knowledge to sift through all the data that would be returned. Secondly, that world is full of companies and people who might not be all that they first seem. Who can you trust? Most people would know that online reviews are open to abuse, and would treat them as only a rough guide. They may not have realised until recently that those price comparison sites will act as a broker, taking commission and quite possibly not showing them the best deal if more commission is available on other deals. Who guards the guards?
So how do people cope? They cope by relying on brands. They cope by relying on personal networks: the old school tie, colleagues and local contacts (we're back to the importance of proximity again). There has been disruption, but not the huge disruption that was forecast. Instead, the internet has increased barriers to entry and reinforced old behaviours. I forecast the importance of brands 15 years ago, but didn't realise the importance of contacts, otherwise I might have gone to more old school and college dinners and joined LinkedIn earlier.
13 Nov 14.
Whizzing round the Olympic velodrome last weekend with my family was great fun, if slightly terrifying as you go up the banking. Not surprisingly, data was the last thing on my mind! But at the end we received certificates, which were all misnamed. This is pertinent for a couple of reasons. Firstly, it shows there is a manual interface in place: somebody had copied our names from a screen (our online booking) or a printout onto a piece of paper, and passed that paper to someone else to create the certificates on another system. That interface was obviously seriously error prone. Secondly, my name is John Davis. It is quite common (I blame my parents), and most clerical workers would know that it could be John or Jon, and Davis might be Davies. My son, Guy Davis, became Gille Davies.
So the great experience was tainted by a certificate that was wrong for every single member of the family. No one had thought through the consequences of the data quality issues or taken a bit more care. We won't be framing the certificate and putting it on our wall, or tweeting a picture of it, or sharing it on Facebook - costing the Velodrome valuable free marketing. These issues of missing interfaces and poor data quality are the same ones that I came across when I started working in computing and business analysis in the 1980s. Things may have improved for a while, but now that companies have some applications in the cloud, we have returned to a situation of many silos, often with imperfect interfaces between them.
A focus on data quality is an issue that will run on. This is partly an issue with supervision and the reinforcement of the importance of getting data correct before you can rely on it to drive your business (It needs that great MBA stalwart, "Senior Management Commitment"). This will get no easier as the workforce is infiltrated by a younger generation who are accustomed to txting and spreading their focus across a number of apps on a number of devices, with a commensurate loss of attention to detail (am I sounding old now?). Nevertheless we can do better, taking out scribbled manual interfaces by calling APIs, using our systems to do more validation and checking at the point of original data capture, and reconciling data between silos and against other sources. Not only is this more efficient, companies get happy customers as it shows they care about the little things - our names, and who we are!
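Even the reconciliation step can be surprisingly cheap to automate. As a minimal sketch, here is how names captured in two different systems might be compared using Python's standard-library difflib; pairing the two lists positionally is an assumption for illustration - a real reconciliation would join on a shared key such as a booking reference:

```python
import difflib
from typing import List, Tuple


def flag_mismatches(system_a: List[str], system_b: List[str],
                    threshold: float = 0.8) -> List[Tuple[str, str]]:
    """Flag pairs of names whose similarity falls below `threshold`.

    The two lists are paired positionally -- an assumption for this
    sketch. Near matches (John/Jon, Davis/Davies) score highly and
    pass; badly transcribed names fall below the threshold and are
    returned for a human to check.
    """
    flagged = []
    for a, b in zip(system_a, system_b):
        ratio = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio < threshold:
            flagged.append((a, b))
    return flagged
```

A check like this at the point of data capture, or run between silos, is exactly the kind of little thing that shows customers you know who they are.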
If you need help with your processes, data, or interfaces, then give me a call. I'm the red and black blur.
4 Nov 14.