The investing world has a significant problem when it comes to data about small and medium-sized enterprises (SMEs). This has nothing to do with data quality or accuracy; it's the lack of any data at all.
Assessing SME creditworthiness has been notoriously challenging because small enterprise financial data is not public, and therefore very difficult to access.
S&P Global Market Intelligence, a division of S&P Global and a foremost provider of credit ratings and benchmarks, claims to have solved this longstanding problem. The company's technical team built RiskGauge, an AI-powered platform that crawls otherwise elusive data from over 200 million websites, processes it through numerous algorithms and generates risk scores.
Built on Snowflake architecture, the platform has increased S&P's coverage of SMEs by 5X.
"Our objective was expansion and efficiency," explained Moody Hadi, S&P Global's head of risk solutions' new product development. "The project has improved the accuracy and coverage of the data, benefiting clients."
RiskGauge's underlying architecture
Counterparty credit management essentially assesses a company's creditworthiness and risk based on several factors, including financials, probability of default and risk appetite. S&P Global Market Intelligence provides these insights to institutional investors, banks, insurance companies, wealth managers and others.
"Large and financial corporate entities lend to suppliers, but they need to know how much to lend, how frequently to monitor them, what the duration of the loan would be," Hadi explained. "They rely on third parties to come up with a trustworthy credit score."
But there has long been a gap in SME coverage. Hadi pointed out that, while large public companies like IBM, Microsoft, Amazon, Google and the rest are required to disclose their quarterly financials, SMEs don't have that obligation, thus limiting financial transparency. From an investor perspective, consider that there are about 10 million SMEs in the U.S., compared to roughly 60,000 public companies.
S&P Global Market Intelligence claims it now has all of those covered: previously, the firm only had data on about 2 million SMEs, but RiskGauge has expanded that to 10 million.
The platform, which went into production in January, is based on a system built by Hadi's team that pulls firmographic data from unstructured web content, combines it with anonymized third-party datasets, and applies machine learning (ML) and advanced algorithms to generate credit scores.
The company uses Snowflake to mine company pages and process them into firmographic drivers (market segmenters) that are then fed into RiskGauge.
The platformβs data pipeline consists of:
- Crawlers/web scrapers
- A pre-processing layer
- Miners
- Curators
- RiskGauge scoring
Specifically, Hadi's team uses Snowflake's data warehouse and Snowpark Container Services in the middle of the pre-processing, mining and curation steps.
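S&P hasn't published implementation details, but the stages listed above can be read as a simple sequential flow. The skeleton below is a hypothetical sketch that mirrors those stage names; every function is a placeholder for illustration, not actual RiskGauge code, and the real system runs inside Snowflake and Snowpark Container Services rather than as plain Python.

```python
# Hypothetical skeleton mirroring the pipeline stages listed above.
# Every function here is a placeholder for illustration only.

def crawl(domain: str) -> list[str]:            # crawlers/web scrapers
    ...

def preprocess(pages: list[str]) -> list[str]:  # pre-processing layer
    ...

def mine(texts: list[str]) -> dict:             # miners: extract firmographic drivers
    ...

def curate(fields: dict) -> dict:               # curators: validate and reconcile fields
    ...

def score(curated: dict) -> int:                # RiskGauge scoring on a 1-100 scale
    ...

def risk_gauge(domain: str) -> int:
    """Run a company's web domain through the full pipeline."""
    return score(curate(mine(preprocess(crawl(domain)))))
```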
At the end of this process, SMEs are scored on a scale of 1 to 100 based on a combination of financial, business and market risk, with 1 the highest and 100 the lowest. Investors also receive reports on RiskGauge detailing financials, firmographics, business credit reports, historical performance and key developments. They can also compare companies to their peers.
How S&P is collecting valuable company data
Hadi explained that RiskGauge employs a multi-layer scraping process that pulls various details from a company's web domain, such as basic "contact us" and landing pages, as well as news-related information. The miners go down several URL layers to scrape relevant data.
"As you can imagine, a person can't do this," said Hadi. "It is going to be very time-consuming for a human, especially when you're dealing with 200 million web pages." That, he noted, results in several terabytes of website information.
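The article doesn't name the crawling stack, but a depth-limited crawl of a company domain can be sketched roughly as follows. This assumes the requests and BeautifulSoup libraries, a two-layer depth limit and a same-domain filter, all of which are illustrative choices rather than details S&P has confirmed.

```python
# Hypothetical depth-limited crawler: fetch the landing page, then follow
# same-domain links a few URL layers down, as described above.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl_domain(start_url: str, max_depth: int = 2) -> dict[str, str]:
    """Return {url: html} for pages reachable within max_depth link hops."""
    pages: dict[str, str] = {}
    frontier = [(start_url, 0)]
    domain = urlparse(start_url).netloc
    while frontier:
        url, depth = frontier.pop()
        if url in pages or depth > max_depth:
            continue
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip unreachable pages rather than failing the whole crawl
        pages[url] = html
        if depth < max_depth:
            for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                link = urljoin(url, anchor["href"])
                if urlparse(link).netloc == domain:  # stay on the company's own site
                    frontier.append((link, depth + 1))
    return pages
```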
After data is collected, the next step is to run algorithms that remove anything that isn't text; Hadi noted that the system is not interested in JavaScript or even HTML tags. Data is cleaned so it becomes human-readable, not code. Then, it's loaded into Snowflake and several data miners are run against the pages.
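A minimal version of that cleaning step might look like the sketch below, which assumes BeautifulSoup: it drops script and style elements entirely and keeps only the visible text. S&P's actual pre-processing layer isn't described in this level of detail.

```python
from bs4 import BeautifulSoup

def html_to_text(html: str) -> str:
    """Strip tags, scripts and styles; keep only human-readable text."""
    soup = BeautifulSoup(html, "html.parser")
    for element in soup(["script", "style", "noscript"]):
        element.decompose()  # discard JavaScript and CSS blocks entirely
    # Collapse whitespace so the downstream miners see clean, readable text.
    return " ".join(soup.get_text(separator=" ").split())
```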
Ensemble algorithms are critical to the prediction process; these types of algorithms combine predictions from several individual models (base models or "weak learners" that are essentially a little better than random guessing) to validate company information such as name, business description, sector, location and operational activity. The system also factors in any polarity in sentiment around announcements disclosed on the site.
"After we crawl a site, the algorithms hit different components of the pages pulled, and they vote and come back with a recommendation," Hadi explained. "There is no human in the loop in this process; the algorithms are basically competing with each other. That helps with the efficiency to increase our coverage."
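The voting Hadi describes can be pictured as a majority vote over the values each base model extracts for a given field. The toy function below is an illustration of that idea, not S&P's actual ensemble; the agreement threshold is an assumption.

```python
from collections import Counter

def vote_on_field(candidates: list[str], min_agreement: float = 0.5) -> str | None:
    """Return the value most base models agree on, if agreement is strong enough.

    `candidates` holds one extracted value per base model, e.g. several guesses
    at a company's sector pulled from different parts of its site.
    """
    if not candidates:
        return None
    value, votes = Counter(candidates).most_common(1)[0]
    return value if votes / len(candidates) >= min_agreement else None

# Three of four miners agree, so the sector is accepted with no human review.
print(vote_on_field(["Manufacturing", "Manufacturing", "Retail", "Manufacturing"]))
```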
Following that initial load, the system monitors site activity, automatically running weekly scans. It doesn't update information on every scan, only when it detects a change, Hadi added. On a subsequent scan, the system compares a hash key of the landing page from the previous crawl against a newly generated one; if they are identical, no changes were made and no action is required. If the hash keys don't match, the system is triggered to update the company's information.
This continuous scraping is important to ensure the system remains as up-to-date as possible. "If they're updating the site often, that tells us they're alive, right?" Hadi noted.
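The change detection Hadi describes amounts to comparing a fingerprint of the current landing page against the one stored from the previous crawl. A minimal sketch, assuming a SHA-256 hash over the raw page content (the article doesn't specify the hashing scheme):

```python
import hashlib

def page_hash(html: str) -> str:
    """Fingerprint a crawled page so later scans can detect changes."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def needs_update(previous_hash: str, current_html: str) -> bool:
    """True if the landing page changed since the last scan."""
    return page_hash(current_html) != previous_hash

# Weekly scan: a full refresh is triggered only when the hashes differ.
stored = page_hash("<html>old landing page</html>")
print(needs_update(stored, "<html>updated landing page</html>"))  # True
```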
Challenges with processing speed, giant datasets, unclean websites
There were challenges to overcome when building out the system, of course, particularly due to the sheer size of the datasets and the need for quick processing. Hadi's team had to make trade-offs to balance accuracy and speed.
"We kept optimizing different algorithms to run faster," he explained. "And tweaking; some algorithms we had were really good, had high accuracy, high precision, high recall, but they were computationally too costly."
Websites do not always conform to standard formats, requiring flexible scraping methods.
"You hear a lot about designing websites with an exercise like this, because when we originally started, we thought, 'Hey, every website should conform to a sitemap or XML,'" said Hadi. "And guess what? Nobody follows that."
They didn't want to hard-code or incorporate robotic process automation (RPA) into the system because sites vary so widely, Hadi said, and they knew the most important information they needed was in the text. This led to the creation of a system that pulls only the necessary components of a site, then cleans it down to the actual text and discards code and any JavaScript or TypeScript.
As Hadi noted, "the biggest challenges were around performance and tuning and the fact that websites by design are not clean."