Increasing Production Rates Within the SCAQMD’s Permitting Process, Part II: A Process-Improvement Case Study

Feb 16, 2018 | Environmental Compliance

(Image credit: Pugo Design Studio)

This is the second in a two-part series on the SCAQMD permitting backlog; Part I described the current state of affairs.

Last week, I discussed the massive SCAQMD permitting backlog — the reasons behind it and some of the agency’s proposed solutions. This week, I’d like to put forth a solution I’ve been pondering for quite some time that involves some rather forward-thinking technology. Although it might seem a bit futuristic at times, stay with me. I’m going to walk you through this so you can see just how beneficial such a solution can be.

Technology and Artificial Intelligence as a Possible Production Solution

When it comes to fixing the SCAQMD backlog, current advances in technology could increase the agency’s permit-processing capacity.

Indeed, the draft approach established by the SCAQMD does include some examples of solutions that use technology. However, I believe that in order to achieve the production increases the SCAQMD is looking for, the agency should look to the success of other industries and turn to machine learning (ML), artificial intelligence (AI), and natural-language processing (NLP).

You’re probably already familiar with AI, but maybe not so much with ML and NLP, which are subsets of it. Machine learning is the branch of AI in which computers are able to “learn” from data via algorithms rather than being explicitly programmed. Examples of problems being solved with ML include spam filtering and face detection, both of which require time and repetition for the computer to learn. Forbes.com has a more detailed yet easy-to-grasp explanation. The final piece, NLP, is the ability of a computer to interpret language as it is spoken or written, with no special translation into computer-ese required by a human. Here’s more from Forbes on that bit of tech.
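To make that a little more concrete, here’s a minimal sketch of what NLP looks like in practice. I’m using the open-source spaCy library purely for illustration (any comparable toolkit would do), and the permit-application sentence is one I made up.

```python
# A minimal NLP sketch using the open-source spaCy library (my choice for
# illustration). The permit-application sentence below is invented.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model; installed separately

text = ("Acme Coatings Inc. requests a permit to operate a spray booth "
        "emitting 2.5 tons of VOCs per year at its Long Beach facility.")

doc = nlp(text)

# Entities the model recognizes out of the box (organizations, quantities, places)
for ent in doc.ents:
    print(ent.text, ent.label_)

# Noun phrases are a rough first pass at "what equipment is being proposed"
print([chunk.text for chunk in doc.noun_chunks])
```

No human had to translate that sentence into computer-ese; the software pulls out the company, the quantity, and the location on its own.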

Think of a virtual AI assistant such as Alexa or Siri: When you first started using it, the system may not have understood your speech, but over time it has learned who you call most often, where you drive on a regular basis, and even what your favorite songs are.

Advances in complex algorithms that allow computers to understand text the way humans do have been on the rise in both the academic and industrial spheres. And the development of closed- and open-source NLP libraries and services allows solutions based on this technology to be built and deployed quickly.

Examples of Prior Art That Use Machine Learning and Natural Language Processing

To be clear, my proposal isn’t meant to be the ultimate fix, just one possible approach to the overall problem. But when you review how other industries are using NLP, applying it to permit processing seems logical.

A review of prior art (a fancy term for existing technology and proposed inventions) reveals the use of NLP in several industries. Stanford University has developed NLP models that can be used to determine authorship in signed judicial opinions with an extremely high rate of accuracy. Some companies are using NLP to analyze contracts to identify sections that could pose issues to a specific party. Even the finance and construction-management sectors have been using NLP for various functions, including, in the latter case, automated compliance checking.

And if you think the permitting process is too complex for AI and ML to handle, consider that fields once considered purely subjective are now using this technology. It’s long been argued that the arts are one arena where the human mind just can’t be replaced, but even that hurdle has been cleared. In 2016, the short film “Sunspring” was written entirely by an AI. (Decide for yourself whether it stacks up against the competition by watching it here.)

How Machine Learning Can Be Used to Expedite Processing of Permit Applications

Machine learning can be broken down into several categories, two of the largest being supervised and unsupervised learning.

Unsupervised learning includes computer algorithms that are used to draw inferences from a data set. In an unsupervised learning model, there are no “right or wrong” answers — just clusters of data from which inferences can be drawn.
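As a toy illustration of unsupervised learning, here’s a sketch that clusters a handful of made-up permit applications by a few numeric features. The feature choices and numbers are purely hypothetical, not SCAQMD data.

```python
# A toy unsupervised-learning sketch: cluster permit applications by a few
# numeric features. All values here are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Columns: [estimated emissions (tons/yr), equipment count, fee paid ($1,000s)]
applications = np.array([
    [0.5,  1,  2.1],
    [0.7,  1,  2.3],
    [12.0, 8, 15.0],
    [11.5, 7, 14.2],
    [0.6,  2,  2.0],
])

# Ask for two clusters; there is no "right answer," only groupings to inspect.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(applications)
print(model.labels_)  # e.g., [0 0 1 1 0]: small, simple permits vs. large, complex ones
```

Nobody told the algorithm which applications were “simple” and which were “complex”; it grouped them on its own, and we draw the inferences.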

Supervised learning, on the other hand, uses algorithms trained on pairs of data (an input and its known output, much like an independent and dependent variable) to draw inferences. One such example is an email spam filter, which, over time, learns whether or not a message should be marked as spam.
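Here’s a bare-bones sketch of that supervised idea using scikit-learn; the training messages and labels are invented for illustration.

```python
# A bare-bones supervised-learning sketch in the spirit of a spam filter.
# The training pairs (message, label) are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "You have won a free cruise, click now",
    "Meeting moved to 3 pm, see agenda attached",
    "Claim your prize before midnight",
    "Draft permit evaluation attached for review",
]
labels = ["spam", "not spam", "spam", "not spam"]

# Pipeline: turn text into word counts, then fit a Naive Bayes classifier.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(messages, labels)

print(classifier.predict(["Free prize if you click today"]))  # -> ['spam']
```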

An ML model built to process permit applications would need to analyze rule evaluations and other information in order to tell the permit engineer whether what the applicant has proposed can comply with the SCAQMD’s regulations.

Using an ML model to complete a rule analysis, or even to check whether a fee calculation is correct, is a logical starting point for using ML algorithms to increase production. Just look at the examples of prior art above. Then consider the gargantuan amount of data the SCAQMD already has from the thousands of permits it’s processed over the years. Each of those permits already has a desired output (i.e., approval or rejection) assigned to it. In other words, we’ve already got a data set to work with. And trust me, it’s ginormous.

The SCAQMD also has many different types of data on each facility (facility type, New Source Review balance, facility operation, the rules that apply to it, etc.) that can be used to develop these models. That means a trained model, coupled with a way to electronically harvest data from past and current permit applications, could very well increase production and thus shorten the SCAQMD’s permit-processing timelines.
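For illustration only, here’s a hedged sketch of how such a supervised model might be trained on historical permit data. The file name, column names, and features are assumptions on my part, not actual SCAQMD data structures.

```python
# Hypothetical sketch: train a supervised model on historical permit data.
# The CSV file and column names are assumptions for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Imagined extract of past applications, each with its final outcome.
history = pd.read_csv("historical_permits.csv")  # hypothetical file
features = history[["facility_type", "nsr_balance", "emissions_tpy", "rule_count"]]
features = pd.get_dummies(features, columns=["facility_type"])  # encode categories
outcomes = history["approved"]  # 1 = approved, 0 = denied

X_train, X_test, y_train, y_test = train_test_split(
    features, outcomes, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```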

In such a scenario, a permit engineer would be able to scan the permit application into an AI/ML model that would then extract information needed to process the application, perform a complete rule analysis, verify emission rates, etc. As a quality-control measure, the model could be developed to provide an output as a percentage (as opposed to a simple pass/fail outcome) in order to flag those applications that require a more in-depth review.
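Continuing the hypothetical classifier from the sketch above, that quality-control step might look something like this; the 80 percent threshold is an assumed value the agency would tune.

```python
# Sketch of the quality-control idea: use the model's predicted probability
# instead of a pass/fail answer, and route low-confidence applications to a
# human engineer. "model" and "X_test" are the hypothetical objects above.
REVIEW_THRESHOLD = 0.80  # assumed cutoff; the agency would tune this

probabilities = model.predict_proba(X_test)[:, 1]  # probability of "approvable"

for app_id, p in zip(X_test.index, probabilities):
    confidence = max(p, 1 - p)  # how sure the model is, either way
    if confidence >= REVIEW_THRESHOLD:
        print(f"Application {app_id}: {confidence:.0%} confident -- streamlined review")
    else:
        print(f"Application {app_id}: only {confidence:.0%} confident -- flag for in-depth review")
```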

How to Test the Idea

I’m the first to admit that automating a process like the one I’m proposing is an ambitious undertaking, so a series of proofs of concept would be necessary before formally implementing the system. (That’s the scientific way to go about it.)

Developing a proof of concept can take on any variety of forms. Below is an outline for one. In each of these steps, we’re working with the constraints of the permitting process but not leaving any possible solution off the table.

  1. Map It Out. Develop a workflow that outlines the permitting process as a series of checkpoints, as well as the distribution of permits at each checkpoint. For example, the first checkpoint might just be whether or not the application is complete, while another might be making sure the submitted fee is accurate. We then take those 7,348 backlogged permits and see which checkpoint each is stuck at. Finally, select one checkpoint to study: it should be large enough to impact the overall process but small enough that the test doesn’t become overly complicated. This will be our Sample Checkpoint.
  2. Create a “Sprint.” I took this term from the approach outlined in the book Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days. Simply put, it involves setting a short timeline (say, five or 10 days) to test the Sample Checkpoint pinpointed in Step #1. So we’re really just isolating one small problem in the whole mess of the backlog and focusing all our energy on tackling it.
  3. Create a Framework. Develop a high-level framework for how an ML/AI model could move a permit through the checkpoint. Using the above example, we’d use our ML/AI/NLP software to develop an algorithm that checks whether or not the applicant has submitted the proper fee (a minimal sketch of that check follows this list).
  4. Develop a Prototype. We next build a model to process the information of the Sample Checkpoint. At this point, the key is not to generate a production model but to create a working prototype that can process information. We will also need to identify the NLP tools currently available in order to cut down on the time needed to develop a prototype.
  5. Beta Test. Test the model on a population of data and collect information about the performance of the model.
  6. Rinse and Repeat. Study the data and make changes in order to optimize the model.
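To ground Step 3, here’s a minimal sketch of what a fee-checkpoint prototype could look like. The fee schedule, equipment categories, and extracted fields are all invented for illustration; a real prototype would pull them from the scanned application via the NLP front end.

```python
# Minimal prototype of the fee checkpoint from Step 3. The fee schedule and
# the application fields are invented; real values would come from the
# agency's fee rules and the NLP extraction step.
FEE_SCHEDULE = {            # hypothetical flat fees by equipment category
    "spray_booth": 1_250.00,
    "boiler": 2_400.00,
    "emergency_generator": 980.00,
}

def expected_fee(equipment_list):
    """Sum the scheduled fee for each piece of equipment on the application."""
    return sum(FEE_SCHEDULE.get(item, 0.0) for item in equipment_list)

def check_fee(application):
    """Return True if the submitted fee covers the expected fee."""
    return application["fee_paid"] >= expected_fee(application["equipment"])

# Example application, as the NLP front end might extract it.
sample = {"equipment": ["spray_booth", "boiler"], "fee_paid": 3_650.00}
print(check_fee(sample))  # -> True
```

Even a prototype this simple, run against the backlog, would tell us how many applications clear the fee checkpoint automatically and how many need a human touch.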

Closing

Yes, this is an ambitious proposal and will involve a change in behavior and a major deviation from the status quo — not to mention an increased budget. However, innovation is not about delivering the “better sameness” (something that is better, faster, and cheaper). Rather, it’s about leveraging what is currently available to look at a problem in a different way.

In this case, the better sameness would be processing permits faster by using more people and working more hours. What I’m suggesting is that we bring something not currently used in the compliance field (ML/AI) to bear on the permit-processing problem.

That’s innovative. But I may be biased. 🙂