
Over 20 years of professional experience, Reggie has honed skills across multiple domains in data science, network architecture, and engineering. Here is a small sample of work product that demonstrates the value he brings to the table.

Listed below is a sample of work that is shareable. Reggie has produced more than 200 analyses using R, SQL, and Hadoop/Hive, ranging from in-depth, enterprise-wide studies to quick-turn, ad-hoc work for leaders up to the executive level to support decision making and drive scalable results.


For more information on RAN strategy and network architecture work, please contact me; I would love to talk about work in RAN centralization, vRAN, fronthaul, O-RAN, open interfaces, RIC, orchestration, xApps/rApps, 3GPP alignment, and more.

Championed centralization of RAN infrastructure with O-RAN compliant interfaces. Challenged financial models and transformed resiliency models.

  • Presented at multiple global conferences and to executives across the enterprise.

  • Developed centralized infrastructure architecture that reduced cost, improved performance, enhanced resiliency, and prepared for future technologies.

  • Transformed traditional distributed RAN architecture into mini data centers at the edge.

  • Engaged with industry forums, standards bodies, vendor partners, and other operators to drive greater awareness and adoption.

  • Tested proofs of concept in a lab environment designed and integrated by Reggie.

  • Conducted RFPs and awarded business to vendors.

  • Provided scalable solutions to the enterprise complete with documentation.

  • Served as a central escalation point during initial deployments and mentor to cross-functional groups.

[Figures: RAN performance and centralization architecture diagrams]

TensorFlow machine learning models for predicting network throughput and performance classification.

  • Provided highly accurate throughput predictions in critical performance regions, outperforming the linear regression techniques previously in use.

  • Curated data from network performance measures.

  • Trained a 32-node, 4-layer model to predict future throughput performance with high accuracy (a minimal sketch of this type of model follows below).
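As a rough illustration only, the sketch below assembles a small dense network with the keras package for R; the feature count, layer widths, and hyperparameters are assumptions for illustration, not the production configuration.

```r
# Minimal sketch of a small dense throughput-prediction model using the keras
# package for R. Feature count, layer widths, and hyperparameters are
# illustrative assumptions, not the production configuration.
library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%  # assumed 10 curated KPI features
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 1)                      # predicted throughput (Mbps)

model %>% compile(optimizer = "adam", loss = "mse", metrics = "mae")

# x_train / y_train would hold the curated network performance measures:
# model %>% fit(x_train, y_train, epochs = 50, validation_split = 0.2)
```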

Developed capacity forecasting and capital planning platform.

  • Platform drives capital investments of $500M+ annually.

  • Enhanced accuracy over other methods and deferred $62M in capital in its first year of use.

  • Complete end-to-end solution written in SQL, R, Hadoop/Hive, and Shiny.

  • Aggregates hundreds of millions of data points from 100k+ network elements into a manageable data structure.

  • Utilizes ARIMA methodologies for time-series forecasting (see the sketch after this list).

  • Integrates an M/G/1 queueing theory model (also developed by Reggie) and linear regression to refine model accuracy.

  • Back testing and error analyses ensure model performance does not drift or degrade.

  • A GUI written in Shiny complemented the tool and made it accessible to more users.

  • The platform is still in use today and drives executive-level decisions of more than $500M annually.

  • Continues to serve as an escalation point and advisor, making code updates and incorporating new network capabilities into the algorithms.
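As a simplified illustration of the forecasting step, the sketch below fits an ARIMA model to a single element's traffic series with the forecast package; the simulated series and horizon are placeholders, and the platform's Hadoop/Hive aggregation happens before this step.

```r
# Simplified sketch of the forecasting step with the forecast package. The
# traffic series is simulated; in the platform, per-element traffic is
# aggregated from Hadoop/Hive before this step.
library(forecast)
set.seed(1)

# two years of weekly traffic (GB) for one network element -- illustrative
traffic <- ts(500 + cumsum(rnorm(104, mean = 5, sd = 2)), frequency = 52)

fit  <- auto.arima(traffic)      # automatic (p,d,q) selection
fcst <- forecast(fit, h = 52)    # forecast one year ahead

plot(fcst)                       # point forecast with prediction intervals
```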

Developed and applied queueing theory to build network capacity models. 

  • Models saved $10M+ in capital spend annually.

  • Reduced over provisioning of the network and de-risked capacity deployments.

  • Queueing models were derived from work typically applied to data center infrastructure, then adapted and applied to telecom infrastructure.

  • Models translated into Excel spreadsheets for easy use by any audience and use case.

  • Models also coded into R and SQL to provide a scalable solution at the enterprise level (a simplified sketch follows this list).

  • Developed alternative solutions to cover additional domains in the network such as the transport layer.

  • Models are still in use today and drive executive-level decision making of $500M+ annually.
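For context, the sketch below computes the mean queueing delay of an M/G/1 system via the standard Pollaczek-Khinchine formula in R; the arrival and service parameters are illustrative, and the production models contain additional telecom-specific detail.

```r
# Mean-delay calculation for an M/G/1 queue via the Pollaczek-Khinchine
# formula; parameter values below are illustrative only.
mg1_wait <- function(lambda, mean_service, sd_service) {
  rho <- lambda * mean_service             # utilization
  if (rho >= 1) stop("Unstable queue: utilization >= 1")
  es2 <- sd_service^2 + mean_service^2     # E[S^2]
  wq  <- lambda * es2 / (2 * (1 - rho))    # mean wait in queue
  c(utilization = rho, wait_in_queue = wq, total_sojourn = wq + mean_service)
}

# Example: packets arrive at 800/s, mean service time 1 ms, service sd 0.5 ms
mg1_wait(lambda = 800, mean_service = 0.001, sd_service = 0.0005)
```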

[Figure: M/G/1 queueing equation]

Monte Carlo simulations to assess capacity impacts of emerging technologies. 

  • Simulations de-risked an upcoming technology deployment by addressing growing capacity concerns and avoided severe, costly over-provisioning of the network.

  • Coded an example RAN scheduler algorithm in R with random sampling across iterations (a simplified sketch follows this list).

  • Provided a system model and digital twin of the network to iterate through many scenarios quickly.

  • The iteration speed provided a range of probabilistic outcomes.

  • All simulation and visualization written in R.

  • Work was presented at executive level and later published in journals.
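The sketch below shows the general Monte Carlo pattern in R: a simplified, equal-share scheduler scenario sampled many times to produce a distribution of outcomes. The scheduler logic and parameters are stand-ins, not the model from the published work.

```r
# Monte Carlo pattern: repeatedly sample a simplified, equal-share scheduler
# scenario and summarize the distribution of outcomes. Scheduler logic and
# parameters are stand-ins, not the model from the published work.
set.seed(42)

simulate_tti <- function(n_users, cell_capacity_mbps = 150) {
  demand <- rexp(n_users, rate = 1 / 5)    # per-user demand (Mbps)
  share  <- cell_capacity_mbps / n_users   # equal-share scheduler
  sum(pmin(demand, share))                 # carried traffic this iteration
}

n_iter  <- 10000
users   <- rpois(n_iter, lambda = 20) + 1            # random active-user counts
carried <- vapply(users, simulate_tti, numeric(1))

quantile(carried, c(0.05, 0.5, 0.95))                # probabilistic range
hist(carried, main = "Carried traffic per iteration", xlab = "Mbps")
```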

[Figures: VoLTE capacity simulation results and published journal figure]

Integrated time-series intervention algorithms to account for the effects of outliers on a growing network.

  • Algorithm prevented false positives or missed events due to level shifts or outliers in the historical data.

  • Outliers can be caused by spikes in traffic such as concerts, weather events, outages, etc.

  • Sudden shifts in trends can occur when infrastructure added to the network suddenly takes on new traffic.

  • Both can lead to unnecessary capital expenditures.

  • The intervention algorithms normalize the outliers and can adjust historical data to preserve underlying trends while minimizing the impact of a level shift.

  • These algorithms led to a more accurate forecasting tool (one possible implementation is sketched below).
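One reasonable way to implement this kind of detection in R is the tsoutliers package, sketched below on simulated data with an injected spike and level shift; the production algorithm may differ in detail.

```r
# Sketch of an intervention-detection step using the tsoutliers package on
# simulated data with an injected spike and level shift; the production
# algorithm may differ.
library(tsoutliers)
set.seed(11)

# weekly traffic with a spike (e.g., a concert) and a later level shift
# (e.g., new infrastructure suddenly absorbing traffic)
traffic <- ts(50 + 0.5 * (1:120) + rnorm(120, sd = 2))
traffic[30]     <- traffic[30] + 40      # additive outlier
traffic[80:120] <- traffic[80:120] - 15  # level shift

fit <- tso(traffic, types = c("AO", "LS", "TC"))
fit$outliers          # detected events with type, index, and magnitude
adjusted <- fit$yadj  # series with outlier and level-shift effects removed
plot(fit)
```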

TensorFlow ML models to predict price movements in financial markets.

  • Curated data sets from financial markets including price and volume.

  • Enriched the data set manually with technical indicators such as Bollinger Bands, simple moving averages, news sentiment indexes, and other macroeconomic indicators (see the sketch after this list).

  • Trained a model with 64 nodes on the input layer and 5 layers deep.

  • Output delivered both a predicted price movement value as well as a broader probability that the price would be either higher or lower.

  • It turned out to be a good academic exercise; no trades were executed with the model.

  • Challenges included:

    • Access to quality, real-time data is cost-prohibitive for a retail investor.

    • Overfitting plagued the training process, which was discovered during backtesting.
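As an example of the enrichment step, the sketch below adds simple moving averages and Bollinger Bands to a public price series using quantmod/TTR; the ticker and window lengths are arbitrary choices, and the sentiment and macro features came from other sources not shown here.

```r
# Sketch of the feature-enrichment step: simple moving averages and Bollinger
# Bands added to a public price series with quantmod/TTR. Ticker and window
# lengths are arbitrary; sentiment and macro features are not shown.
library(quantmod)   # also loads TTR and xts

prices <- getSymbols("SPY", src = "yahoo", auto.assign = FALSE)
close  <- Cl(prices)

features <- merge(close,
                  SMA(close, n = 20),
                  SMA(close, n = 50),
                  BBands(close, n = 20, sd = 2))   # dn / mavg / up / pctB
colnames(features) <- c("close", "sma20", "sma50",
                        "bb_lower", "bb_mid", "bb_upper", "bb_pctB")

head(na.omit(features))
```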


Multivariate linear regression modeling to predict future network performance.

  • Regression models allowed future network performance to be predicted according to the trends of multiple inputs that are all critical drivers of network performance.

  • Predictions allowed for short term action to be executed proactively.

  • Allowed for anomaly detection by identifying outlier sites that were not behaving within the boundaries of the general models (a minimal sketch follows this list).
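A minimal sketch of the approach is below: fit a multiple linear regression on network KPIs, predict from projected driver values, and flag sites with extreme residuals. The driver names and synthetic data are assumptions for illustration, not the actual model inputs.

```r
# Minimal sketch: multiple linear regression on network KPIs with an anomaly
# screen on the residuals. Driver names and the synthetic data are assumed
# for illustration only.
set.seed(1)
kpi <- data.frame(
  prb_utilization = runif(500, 10, 95),      # %
  connected_users = rpois(500, 60),
  sinr_db         = rnorm(500, 12, 3)
)
kpi$throughput_mbps <- 80 - 0.5 * kpi$prb_utilization -
  0.2 * kpi$connected_users + 1.5 * kpi$sinr_db + rnorm(500, sd = 4)

fit <- lm(throughput_mbps ~ prb_utilization + connected_users + sinr_db,
          data = kpi)
summary(fit)

# Predict performance if the drivers continue along projected values
predict(fit, newdata = data.frame(prb_utilization = 90,
                                  connected_users = 75,
                                  sinr_db = 10),
        interval = "prediction")

# Anomaly detection: sites falling well outside the fitted model
anomalies <- kpi[abs(rstandard(fit)) > 3, ]
```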

Integrated emulators into a lab environment. De-risked production deployments of new features and infrastructure.

  • Researched, procured, and integrated a UE emulator into the lab environment.

  • Allowed for a wide range of use case testing and corner case validation that is otherwise not possible in a lab environment.

  • This capability accelerated deployment timelines by reducing testing in production.

  • De-risked production deployments by identifying, debugging, and resolving issues in the lab before they reached the production environment.

  • Wrote code in R to collect performance logs, clean the data, and provide automated visualization, enabling faster root cause analysis (RCA) in the lab and faster testing iterations (a simplified sketch follows this list).
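The sketch below illustrates the shape of that workflow: read exported lab logs, clean them, and plot throughput by test case for quick RCA. The directory layout and column names ("timestamp", "throughput_mbps", "test_case") are assumed.

```r
# Sketch of the lab log workflow: collect exported logs, clean them, and plot
# throughput by test case for quick RCA. Directory layout and column names
# ("timestamp", "throughput_mbps", "test_case") are assumed.
library(ggplot2)

files <- list.files("lab_logs", pattern = "\\.csv$", full.names = TRUE)
logs  <- do.call(rbind, lapply(files, read.csv))

logs$timestamp <- as.POSIXct(logs$timestamp)
logs <- logs[!is.na(logs$throughput_mbps), ]   # drop incomplete records

ggplot(logs, aes(timestamp, throughput_mbps, color = test_case)) +
  geom_line() +
  labs(title = "Lab throughput by test case", x = NULL, y = "Mbps")
```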


UL Interference Analysis - Pattern Recognition and Time Series Trending

  • Developed, automated, and deployed algorithms to identify UL interference that degrades network performance, hurting the customer experience and driving financial impacts.

  • Finding the interference manually was often time-consuming and unsuccessful because the pattern and location were difficult to predict; it required a lot of sitting and waiting for the interference to occur, which was ineffective and costly.

  • Deployed tools to recognize the interference, define the periodicity of occurrences, and locate the interference within the spectrum band (one simplified approach is sketched after this list).

  • Saved significant labor hours and frustration for the field operations staff.
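One simplified way to expose the periodicity is a periodogram over an interference metric, sketched below on simulated data with a burst at the same time each day; the deployed tooling is more involved than this.

```r
# Simplified periodicity check: a periodogram over an interference metric to
# surface the recurrence interval. Data is simulated (15-minute samples of UL
# noise rise with a burst at the same time each day).
set.seed(7)

t     <- 1:(96 * 14)                                 # two weeks, 96 samples/day
noise <- rnorm(length(t), mean = 2, sd = 0.3)
noise <- noise + ifelse((t %% 96) %in% 40:63, 4, 0)  # ~6 h daily burst

pg <- spectrum(noise, plot = FALSE)
period_samples <- 1 / pg$freq[which.max(pg$spec)]
period_samples * 15 / 60                             # recurrence in hours (~24)
```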

[Figure: UL interference pattern]

Enterprise lead for conducting trials on critical infrastructure and customer facing features. 

  • Regularly designed trial parameters, success criteria, and exit criteria.

  • Trials often covered new pricing plans, customer experience treatment, network features, new hardware, and much more.

  • Automated all ETL, analysis, dashboard updating, and dashboard delivery.

  • This allowed resources to focus on decision making rather than manually parsing data.

  • Dashboard delivered to all leadership levels up to executive level.

  • Served as primary escalation point across the enterprise for any areas of concern.

  • Led working sessions with cross-functional teams to ensure the enterprise interpreted data as intended during critical phases of all trials.

  • Written end-to-end in R and SQL (a simplified sketch of the refresh loop follows this list).
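A stripped-down sketch of the refresh loop is below: pull trial KPIs with SQL, summarize in R, and re-render the dashboard. The connection, query, and file names are placeholders, not the production pipeline.

```r
# Stripped-down refresh loop: pull trial KPIs with SQL, summarize in R, and
# re-render the dashboard. DSN, query, and file names are placeholders.
library(DBI)

con   <- dbConnect(odbc::odbc(), dsn = "warehouse")   # assumed data source
trial <- dbGetQuery(con, "SELECT site, kpi, value, day FROM trial_kpis")
dbDisconnect(con)

kpi_summary <- aggregate(value ~ site + kpi, data = trial, FUN = mean)
saveRDS(kpi_summary, "trial_summary.rds")

# Rebuild the dashboard document from the refreshed data
rmarkdown::render("trial_dashboard.Rmd", output_file = "trial_dashboard.html")
```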

Macroeconomics dashboard for personal finance and research (because why not).

  • I consider myself financially literate and enjoy staying educated on the state of the financial markets and the US in general.

  • I stay up to date on data from the Federal Reserve banks, publicly available sentiment indicators, market indices, data subscriptions, quant data sources, and more.

  • To make it easier to digest, I have scripted all of the ETL in R, with links to all the sources.

  • Everything is parsed and prepared in an automated delivery for me to review.

  • This saves me time and keeps me educated by making it easy to consume the information from one place.

  • All charts are interactive using the plotly library available in R (see the sketch after this list).
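As one example of the ETL, the sketch below pulls a public series from FRED with quantmod and charts it interactively with plotly; the series is just one example of the many sources the dashboard aggregates.

```r
# Example ETL step: pull one public series from FRED with quantmod and chart
# it interactively with plotly. The series is only an example of the many
# sources the dashboard aggregates.
library(quantmod)
library(plotly)

unrate <- getSymbols("UNRATE", src = "FRED", auto.assign = FALSE)  # US unemployment rate

df <- data.frame(date = index(unrate), value = as.numeric(unrate))

plot_ly(df, x = ~date, y = ~value, type = "scatter", mode = "lines") %>%
  layout(title = "US unemployment rate (FRED: UNRATE)",
         yaxis = list(title = "Percent"))
```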

[Figures: macroeconomics dashboard views]

Developed network digital twin with suite of growth models to assess financial impact of future infrastructure architecture decisions.

  • Digital twin model guided strategic conversations at the executive level to assess various architecture choices.

  • System model provided clarity while still being able to iterate various choices and scenarios quickly.

  • Once decision point was reached, model quickly guided subsequent capital allocation accordingly.

  • All written in R and visuals organized using knitr (a heavily simplified sketch of the scenario iteration follows this list).
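A heavily simplified sketch of the scenario-iteration idea is below: apply alternative growth assumptions to a modeled population of elements and compare when upgrades trigger and what they cost. The growth rates, trigger threshold, and unit cost are illustrative placeholders, not the actual model.

```r
# Heavily simplified scenario iteration: apply alternative growth assumptions
# to a modeled element population and compare upgrade timing and cost. Growth
# rates, trigger threshold, and unit cost are illustrative placeholders.
set.seed(3)
elements <- data.frame(id = 1:1000,
                       utilization = runif(1000, 0.2, 0.8))  # current utilization

scenario_cost <- function(annual_growth, years = 5,
                          trigger = 0.85, unit_cost_k = 150) {
  util <- outer(elements$utilization, (1 + annual_growth)^(1:years))
  over <- colSums(util > trigger)        # elements above threshold each year
  new_upgrades <- diff(c(0, over))       # incremental upgrades triggered
  data.frame(year = 1:years, upgrades = new_upgrades,
             capex_k = new_upgrades * unit_cost_k)
}

lapply(c(low = 0.15, base = 0.25, high = 0.40), scenario_cost)
```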

Recorded and published technical training videos on network modeling for use across the enterprise.

  • Recorded training videos on complex aspects of my work.

  • This included throughput modeling, queueing theory models, and quality-of-service (QoS) in networks.

  • Schedules are busy; the videos allowed for a broader reach to educate others across the enterprise.

  • These are also complex topics that may require multiple passes to digest; videos make it easy to review the information as time allows.

