Data/ETL Developer

Baltimore, MD
Full Time
SER-TRI-BAL-10082025-0002
Mid Level
TriTech Enterprise Systems, Inc. (TriTech) is seeking a Data/ETL Developer to support a State of Maryland contract. This is a hybrid position located in Baltimore, Maryland. The candidate will be responsible for designing, building, and maintaining data pipelines and infrastructure to support data-driven decisions and analytics. The candidate's tasks will include the following:
  • Design, develop, and maintain data pipelines and extract, transform, load (ETL) processes to collect, process, and store structured and unstructured data
  • Build data architecture and storage solutions, including data lakehouses, data lakes, data warehouses, and data marts, to support analytics and reporting
  • Develop data reliability, efficiency, and quality checks and processes
  • Prepare data for data modeling
  • Monitor and optimize data architecture and data processing systems
  • Collaborate with multiple teams to understand requirements and objectives
  • Administer testing and troubleshooting related to performance, reliability, and scalability
  • Create and update documentation
Additional Responsibilities: In addition to the responsibilities listed above, the individual will be expected to apply data architecture and modeling techniques to do the following:
  • Design and implement robust, scalable data models to support the PMM application, analytics, and business intelligence initiatives
  • Optimize data warehousing solutions and manage data migrations in the AWS ecosystem, utilizing Amazon Redshift, RDS, and DocumentDB services
ETL Development:
  • Develop and maintain scalable ETL pipelines using AWS Glue and other AWS services to enhance data collection, integration, and aggregation
  • Ensure data integrity and timeliness in the data pipeline, troubleshooting any issues that arise during data processing
Data Integration:
  • Integrate data from various sources using AWS technologies, ensuring seamless data flow across systems
  • Collaborate with stakeholders to define data ingestion requirements and implement solutions to meet business needs
Performance Optimization:
  • Monitor, tune, and manage database performance to ensure efficient data loads and queries
  • Implement best practices for data management within AWS to optimize storage and computing costs
Security and Compliance:
  • Ensure all data practices comply with regulatory requirements and department policies
  • Implement and maintain security measures to protect data within AWS services
Team Collaboration and Leadership:
  • Lead and mentor junior data engineers and team members on AWS best practices and technical challenges
  • Collaborate with the UI/API team, business analysts, and other stakeholders to support data-driven decision-making
Innovation and Continuous Improvement:
  • Explore and adopt new technologies within the AWS cloud to enhance the capabilities of the data platform
  • Continuously improve existing systems by analyzing business needs and technology trends
Education:
  • This position requires a bachelor’s or master’s degree from an accredited college or university with a major in computer science, statistics, mathematics, economics, or related field.
  • Three (3) years of equivalent experience in a related field may be substituted for the bachelor’s degree.
General Experience:
  • The proposed candidate must have a minimum of three (3) years of experience as a data engineer.
Specialized Experience:
1. The candidate should have experience as a data engineer or in a similar role, with a strong understanding of data architecture and ETL processes.
2. The candidate should be proficient in programming languages for data processing and knowledgeable of distributed computing and parallel processing:
  • Minimum of five (5) years of ETL coding experience
  • Proficiency in programming languages such as Python and SQL for data processing and automation
  • Experience with distributed computing frameworks like Apache Spark or similar technologies
  • Experience with the AWS data environment, primarily Glue, S3, DocumentDB, Redshift, RDS, Athena, etc.
  • Experience with data warehouses/RDBMS like Redshift and NoSQL data stores such as DocumentDB, DynamoDB, OpenSearch, etc.
  • Experience in building data lakes using AWS Lake Formation
  • Experience with workflow orchestration and scheduling tools like AWS Step Functions, AWS MWAA, etc.
  • Strong understanding of relational databases (including tables, views, indexes, table spaces)
  • Experience with source control tools such as GitHub and related CI/CD processes
  • Ability to analyze a company’s data needs
  • Strong problem-solving skills
  • Experience with the SDLC and Agile methodologies

TriTech is an Equal Opportunity Employer.