Careers

Job Title: Senior Data Engineer, Healthcare (Remote)

Job Description:
As a Senior Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure to support the organization's data needs. You will collaborate with cross-functional teams to ensure the efficient flow of data and contribute to the overall success of our data initiatives. Certifications of any kind are helpful for marketing purposes.

Responsibilities:
Database Design and Maintenance:
Design, implement, and maintain diverse database platforms that meet the organization's requirements for scalability, performance, and reliability. Perform regular database optimization and tuning to ensure optimal performance.

ETL/ELT Development:
Develop and maintain ETL processes to extract, transform, and load data from various sources into the SQL Server databases. Ensure data quality and integrity throughout the ETL process.
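
For illustration, a minimal T-SQL sketch of the kind of staged load involved; the table and column names are hypothetical, and the staging table is assumed to be populated upstream by SSIS or Azure Data Factory:

    -- Upsert new and changed patient rows from staging into the target table.
    MERGE dbo.Patient AS tgt
    USING stg.Patient AS src
        ON tgt.MRN = src.MRN
    WHEN MATCHED AND (tgt.LastName <> src.LastName OR tgt.BirthDate <> src.BirthDate) THEN
        UPDATE SET tgt.LastName  = src.LastName,
                   tgt.BirthDate = src.BirthDate,
                   tgt.UpdatedAt = SYSUTCDATETIME()
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (MRN, LastName, BirthDate, UpdatedAt)
        VALUES (src.MRN, src.LastName, src.BirthDate, SYSUTCDATETIME());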

Data Modeling:
Design and implement data models that align with business requirements and support reporting and analytics needs. Collaborate with data architects to ensure consistency and standardization in data models.
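
As an illustration of the modeling style in question, a small star-schema fragment in T-SQL (hypothetical names), showing surrogate keys, a role-playing date dimension, and an additive measure:

    -- Dimension with a surrogate key; the natural key (MRN) is carried for lookups.
    CREATE TABLE dw.DimPatient (
        PatientKey INT IDENTITY(1,1) PRIMARY KEY,
        MRN        VARCHAR(20) NOT NULL,
        BirthDate  DATE NULL
    );

    -- Transactional fact table keyed to its dimensions by surrogate keys.
    CREATE TABLE dw.FactEncounter (
        EncounterKey BIGINT IDENTITY(1,1) PRIMARY KEY,
        PatientKey   INT NOT NULL REFERENCES dw.DimPatient (PatientKey),
        AdmitDateKey INT NOT NULL,           -- role-playing date dimension
        ChargeAmount DECIMAL(12,2) NOT NULL  -- additive measure
    );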

Collaboration:
Work closely with data scientists, analysts, and other stakeholders to understand data requirements and provide efficient solutions. Collaborate with IT and development teams to integrate data solutions into existing systems.

Monitoring and Optimization:
Implement and maintain monitoring solutions to proactively identify and address issues related to data processing and storage. Optimize queries and database performance to ensure efficient data retrieval.
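
As one example of this kind of monitoring, a DMV query that surfaces the costliest cached statements by average logical reads; this is a sketch, and the thresholds and follow-up actions would vary by environment:

    SELECT TOP (10)
        qs.execution_count,
        qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset END
              - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_logical_reads DESC;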

Mandatory Technical Qualifications:

Healthcare Industry Knowledge:
Department of Health and Human Services (HHS); Health Insurance Portability and Accountability Act (HIPAA); Protected Health Information (PHI); Consolidated Omnibus Budget Reconciliation Act (COBRA). Healthcare information management and delivery systems (HIS); healthcare data: demographic, socioeconomic, financial, clinical, and administrative information; providers and the medical decision-making process; continuum of care; Medical Record Number (MRN), patient account number, Episode of Care (EOC), encounters, stays, and outcomes; inpatient vs. outpatient and the intake process; patient demographic, financial, and clinical tracking and flow of data using HL7. Code sets: International Classification of Diseases (ICD-10-CM, ICD-10-PCS), HCPCS Level I (Current Procedural Terminology, CPT) and Level II (supplies and services); UB-04, 835, 837; public and commercial payers, self-insured plans, guarantors, fee-for-service, value-based purchasing; Centers for Medicare & Medicaid Services (CMS); Medicare Part A (inpatient services), B (outpatient services), C (Advantage plans), D (prescription drugs); Medicare Severity-Diagnosis Related Groups (MS-DRG), All Patient Refined-Diagnosis Related Groups (APR-DRG), Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS), CMS reimbursements, and Continuous Quality Improvement (CQI).
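
To make these data relationships concrete, a hypothetical T-SQL query tying encounters to their coded diagnoses; the schema is illustrative, not a real EMR layout:

    -- Count inpatient encounters per principal ICD-10-CM diagnosis for one episode of care.
    SELECT d.ICD10Code,
           COUNT(*) AS EncounterCount
    FROM dbo.Encounter AS e
    JOIN dbo.Diagnosis AS d
      ON d.EncounterID = e.EncounterID
     AND d.IsPrincipal = 1
    WHERE e.PatientClass = 'I'        -- HL7 PV1-2 patient class: 'I' = inpatient
      AND e.EpisodeOfCareID = @EOC    -- @EOC: hypothetical episode-of-care parameter
    GROUP BY d.ICD10Code
    ORDER BY EncounterCount DESC;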

Development Toolset:
Azure Synapse Analytics, Azure SQL Database, on-premises Microsoft SQL Server; SSDT, SSMS, SSIS, SSRS, SSAS (multidimensional and tabular); Azure Data Studio, Azure Data Factory, Azure Blob and Data Lake Storage; Microsoft Visual Studio, ApexSQL, Mirth Connect; advanced T-SQL (DCL, DDL, DML), Spark SQL, PostgreSQL, MySQL, HiveQL, MDX, DAX, JavaScript, HL7, CSV, XML, JSON; SSAS data mining (decision trees, k-means clustering, and linear regression); HDFS, Databricks, Cloudera; Tableau, Power BI, Report Builder, PowerPivot, Power Query, and MS Office.

Enterprise Data Warehouse Development:
BISM, UDM, EDW, and ODS development; bus matrix, star, and snowflake schema development; slowly changing dimensions (SCD); conformed, role-playing, many-to-many, junk, and outrigger dimensions; transactional, periodic snapshot, and accumulating snapshot fact tables; surrogate and natural keys; degenerate dimensions and inferred members; additive, semi-additive, and non-additive measures; ROLAP, HOLAP, MOLAP; pre-aggregation, entity attribute hierarchies, partitioning, calculated measures; ETL/ELT from external data sources and internal database systems; cross-joining of data marts on conformed dimensions; query execution analysis, DWU scaling, resource classes, ASA table distributions (round robin, hash, replicate), partitions, clustered and non-clustered columnstore indexes, and indexed views; Trino, Kubernetes, Komodo/Prism.
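
As a concrete instance of the Azure Synapse Analytics items above, a hypothetical fact-table definition in a dedicated SQL pool that chooses a hash distribution and a clustered columnstore index:

    -- Hash-distribute the large fact on its join key and store it as a
    -- clustered columnstore index for scan-heavy analytic workloads.
    CREATE TABLE dw.FactClaimLine
    (
        ClaimLineKey BIGINT        NOT NULL,
        PatientKey   INT           NOT NULL,
        DateKey      INT           NOT NULL,
        PaidAmount   DECIMAL(12,2) NOT NULL
    )
    WITH
    (
        DISTRIBUTION = HASH(PatientKey),
        CLUSTERED COLUMNSTORE INDEX
    );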

Online Transaction Processing:
Many data sources originate in OLTP systems and arrive as CSV, HL7, FHIR, XML, JSON, and similar formats; the candidate must therefore have a solid understanding of EMR and EHR systems.

T-SQL, RDBMS concepts, and first through third normal forms (1NF-3NF); transactional parent-child and lookup table relationships; database schemas with one-to-one, one-to-many, and many-to-one relationships and primary key, foreign key, and unique constraints; clustered, non-clustered, and columnstore indexes; cascading updates and deletes; on-premises vs. Azure cloud. Database objects: tables, views, triggers, stored procedures, functions, CTEs, temp tables, and ranking functions; concurrency and implicit and explicit transaction processing.
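
By way of illustration, a small parent-child pair showing several of the constraint and transaction concepts listed above (hypothetical schema):

    CREATE TABLE dbo.PatientAccount (
        AccountID INT IDENTITY(1,1) CONSTRAINT PK_PatientAccount PRIMARY KEY,  -- clustered by default
        MRN       VARCHAR(20) NOT NULL CONSTRAINT UQ_PatientAccount_MRN UNIQUE
    );

    CREATE TABLE dbo.Charge (
        ChargeID  INT IDENTITY(1,1) CONSTRAINT PK_Charge PRIMARY KEY,
        AccountID INT NOT NULL
            CONSTRAINT FK_Charge_Account REFERENCES dbo.PatientAccount (AccountID)
            ON DELETE CASCADE,          -- cascading delete from parent to child
        Amount    DECIMAL(12,2) NOT NULL
    );

    -- Explicit transaction: parent and child rows commit or roll back together.
    BEGIN TRANSACTION;
        INSERT INTO dbo.PatientAccount (MRN) VALUES ('MRN000123');
        INSERT INTO dbo.Charge (AccountID, Amount) VALUES (SCOPE_IDENTITY(), 125.00);
    COMMIT TRANSACTION;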

Database Administration:
In addition to having the opportunity to work with client DBAs, you must also have solid experience with the following technical skills for engagements where clients don't have internal DBA resources.

Installation and configuration of multiple SQL Server instances, including Integration, Reporting, and Analysis Services; creation and management of databases, filegroups, recovery models, and TDE; SQL Server security; backup maintenance plans: full, differential, and transaction log backups; point-in-time recovery and disaster recovery; database cleanup, transaction log maintenance, and rebuilding of indexes and statistics; transaction isolation levels and locking for concurrency; performance monitoring and query tuning and optimization; query execution plan analysis, SQL Profiler, DMVs, SQL Audit, Extended Events, Change Data Capture, Activity Monitor, and Query Store reporting; replication.
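
For instance, the full-plus-log backup pair that a maintenance plan would schedule; under the FULL recovery model, the log backups are what make point-in-time recovery possible (database name and paths hypothetical):

    BACKUP DATABASE ClinicalDW
    TO DISK = N'E:\Backups\ClinicalDW_Full.bak'
    WITH COMPRESSION, CHECKSUM;

    BACKUP LOG ClinicalDW
    TO DISK = N'E:\Backups\ClinicalDW_Log.trn'
    WITH COMPRESSION, CHECKSUM;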

Big Data:
HDFS storage of structured, semi-structured, and unstructured data; Hortonworks and Cloudera (using Hive) and Databricks (using Spark SQL) for extracting data from structured, semi-structured, and unstructured data sources.
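
Hive and Spark SQL queries follow much the same pattern; staying with T-SQL for consistency, here is the analogous in-place read over the Azure Data Lake Storage named in the toolset above, run from a Synapse serverless SQL pool (the storage path is hypothetical):

    -- Read Parquet files directly from Azure Data Lake Storage without loading them first.
    SELECT TOP (100) *
    FROM OPENROWSET(
        BULK 'https://examplelake.dfs.core.windows.net/raw/encounters/*.parquet',
        FORMAT = 'PARQUET'
    ) AS encounters;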

This position requires the individual to hit the ground running, whether developing quick fixes to resolve data anomalies or building greenfield projects.

Please submit your resume and a cover letter explaining why you are the ideal candidate for this position to PatientGo2@gmail.com. 
