🚀 We're Hiring! 🚀 ApTask is seeking an experienced AWS Python Data Engineer with Lambda API experience for a hybrid position in Reston, VA. If you have 10+ years of experience and strong skills in Python, Pandas, NumPy, and PySpark, we want to hear from you!
Job Description:
- Extensive Python development experience
- Expertise in AWS services (S3, RDS, EC2, Lambda, SQS, SNS, Redshift)
- Prior experience at Fannie Mae is a plus
- Proficient in Java and databases (Oracle, Postgres)
- API and Lambda experience (a minimal sketch follows this post)
📧 Send your resume to abhishekk@aptask.com
#Hiring #DataEngineer #AWS #PythonDeveloper #Lambda #API #Pandas #NumPy #PySpark #Java #Database #RestonVA #TechJobs #CareerOpportunity #FannieMae #TechHiring #JobOpening #HybridJobs
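For context only (not part of the ad): "API and Lambda experience" typically means a Python handler invoked through API Gateway. Below is a minimal, hypothetical sketch; the event shape assumes an API Gateway proxy integration, and the `name` parameter is purely illustrative.

```python
# Hypothetical sketch: a minimal AWS Lambda handler behind an
# API Gateway proxy integration.
import json


def lambda_handler(event, context):
    # Query-string parameters arrive in the proxy event; default if absent.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # Proxy integrations expect statusCode/headers/body in the response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```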
-
Senior Technical Recruiter, hiring for AI/ML Engineer, AI Business Analyst, AI Risk Analyst, AI System Analyst
#hiringalert #W2_and_C2C_Acceptable #opentowork #activelylooking
#Business_Systems_Analyst #Mountain_View_CA (#Remote ok – should work in PST)
#Required_Skills:
- 5 years of work experience involving #quantitative_data_analysis
- Advanced #SQL skills to get the data you need from a #datawarehouse and perform #data_segmentation and #aggregation from scratch
- #Data_query and data processing tools/systems (e.g., relational, NoSQL, stream processing)
- Familiarity with #AWS (#Redshift, #Athena, and AWS core concepts)
- Familiarity with #Data_modeling and #schema_design
- Proficient in analytical and #datamodeling tools, such as #Python, #R, or #PyCharm
Please share the resume at sachin@amaglobaltech.com
-
Certified - SQL Dev® - OCI® Server Integration || Azure DevOps || Python || MySQL || MongoDB || RESTful APIs || AWS || Azure || OCI
Hi folks, we are #hiring for multiple positions at #NTTDATA. If anyone is interested, please send your resume to swamy.j@tekwissen.in
Sr Data Engineer (5+ years of experience) - key skills: Scala and Spark
• Good understanding of Scala and Spark
• Strong demonstrable experience in system solutions design (coming from a development background) and hands-on with Java/J2EE or Python
• Strong experience with configuration tools like OpenShift, Kubernetes, and Docker; coding and troubleshooting experience
• Strong experience with load balancer setup, mutual auth setup, Nginx configuration, and SSL; experience with CI tools (Jenkins, TeamCity) and build tools (Maven, Gradle, SBT)
#OpenToWork #JobSeeker #Hiring #JobSearch #LookingForWork #NewJob #JobHunt #CareerOpportunity #Employment #Resume #JobSeeking #Opportunity #ReadyToWork #CareerSearch #NowHiring #JobMarket #AvailableForWork #JobWanted #JobOpening #WorkSearch #DataEngineer #DataEngineering #BigData #ETL #DataWarehousing #DataProcessing #DataIntegration #SQL #NoSQL #DataPipeline #DataAnalytics #DataScience #CloudComputing #Python #Hadoop #Spark #DataTransformation #Database #DataManagement #ETLJobs #DataJobs #TechJobs #Scala #FunctionalProgramming #Programming #DistributedComputing #ScalaLanguage #ScalaDevelopment #TypeSafe #ScalaCommunity #Akka #PlayFramework #ScalaEngineer #ScalaJobs #ScalaProgramming #ConcurrentProgramming #FunctionalLanguages #ScalaDeveloper #ScalaCoding #ScalaSkills #JVM #ReactiveProgramming #ScalaEcosystem #ApacheSpark #SparkFramework #SparkCluster #SparkJobs #SparkProgramming #SparkDevelopers #MachineLearning #DataLake #SparkSQL #SparkML #BigDataAnalytics
-
Dear Network, RSN GINFO SOLUTIONS is seeking:
Job Title: Python Developer - Neo4J Graph Database Integration
Location: Hartford, Connecticut
We are seeking an exceptional Python Developer to play a crucial role in developing an application that integrates the Neo4J graph database with Kafka topics.
Responsibilities:
- Develop a Python application to seamlessly integrate the Neo4J graph database with Kafka topics (a minimal sketch follows this post).
- Design and implement efficient data loading mechanisms to handle large volumes of transactions using change data capture (CDC) techniques.
- Collaborate with cross-functional teams, including data engineers, data scientists, and software developers, to ensure smooth integration and optimal performance.
- Write clear, maintainable, and well-documented code following best practices.
- Develop comprehensive unit tests to ensure the reliability and robustness of the application.
- Create detailed technical documentation to facilitate ease of understanding and future maintenance.
- Implement continuous integration and continuous delivery (CI/CD) pipelines for the Python application.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience (5+ years) working as a Python Developer, preferably in a data-intensive environment.
- Strong proficiency in the Python programming language and experience with Python frameworks (e.g., Flask, Django).
- In-depth understanding of the Neo4J graph database and experience with graph data modeling.
- Hands-on experience with Apache Kafka and knowledge of Kafka Connect for data integration.
- Familiarity with change data capture (CDC) techniques and real-time data processing.
- Solid understanding of the software development lifecycle (SDLC) and agile methodologies.
- Experience writing unit tests using testing frameworks such as pytest.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
- Prior experience implementing CI/CD pipelines using tools like Jenkins, GitLab CI, or similar.
Preferred Qualifications:
- Master's degree in Computer Science or a related field.
- Experience working with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
- Understanding of streaming data processing frameworks such as Apache Flink or Apache Spark Streaming.
- Familiarity with data visualization tools such as D3.js or Plotly.
#PythonDeveloper #Neo4J #Kafka #DataIntegration #CDC #CI/CD #SoftwareEngineering #GraphDatabase #DataEngineering #HartfordJobs #ConnecticutTech #TechJobs #PythonJobs #SoftwareDevelopment #DataProcessing #JobOpening #HiringNow #DeveloperJobs #TechCareer #Programming #SoftwareTesting #UnitTests #Documentation #AgileMethodology #USjob
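Purely illustrative (not part of the posting): a minimal sketch of the Kafka-to-Neo4J loader the responsibilities describe, assuming the kafka-python and neo4j packages; the topic name, connection details, and CDC event fields are placeholder assumptions.

```python
# Hypothetical sketch: consume CDC events from a Kafka topic and upsert
# them into Neo4j. Topic, URI, credentials, and fields are assumptions.
import json

from kafka import KafkaConsumer  # pip install kafka-python
from neo4j import GraphDatabase  # pip install neo4j

consumer = KafkaConsumer(
    "transactions-cdc",                       # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# MERGE makes the load idempotent: replayed CDC events update, not duplicate.
UPSERT = """
MERGE (a:Account {id: $src})
MERGE (b:Account {id: $dst})
MERGE (a)-[t:SENT {tx_id: $tx_id}]->(b)
SET t.amount = $amount
"""

with driver.session() as session:
    for message in consumer:
        event = message.value                 # one CDC record per message
        session.run(
            UPSERT,
            src=event["source_account"],
            dst=event["dest_account"],
            tx_id=event["tx_id"],
            amount=event["amount"],
        )
```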
-
Hello #linkedinfamily
We're Hiring: Technical Specialist
Experience: 8+ years
Location: Remote, India
Immediate joiners preferred.
Responsibilities:
~ Design, develop, and maintain complex big data pipelines using technologies such as Kafka, Apache Storm, or Spark Streaming.
~ Implement and manage distributed caching solutions with Hazelcast or similar technologies to enhance application performance and scalability (a minimal sketch follows this post).
~ Utilize NoSQL databases like MongoDB for storing and managing extensive datasets.
~ Develop and maintain scalable search engines using Elasticsearch or Solr for efficient data retrieval and analysis.
~ Collaborate with developers and data scientists to understand data requirements and translate them into technical solutions.
~ Participate in code reviews to ensure adherence to coding standards and best practices.
~ Troubleshoot and debug complex technical issues related to big data systems.
~ Stay updated with the latest trends and technologies in big data and distributed systems.
Key Attributes and Qualifications:
~ Senior developer experience with proficiency in programming languages like Java, Python, or Scala.
~ Profound knowledge of big data concepts and technologies, with hands-on experience in Kafka, Storm, Hazelcast/distributed cache management, MongoDB, Elasticsearch, or Solr (specify the most relevant technology).
~ Experience in designing and implementing distributed systems.
~ Strong understanding of database concepts and NoSQL technologies.
~ Familiarity with cloud platforms like AWS, Azure, or GCP (optional but advantageous).
~ Excellent problem-solving and analytical skills.
~ Ability to work independently as well as collaboratively within a team.
~ Strong communication and collaboration skills.
If interested, kindly share your resume with Anjali at anjali@mactosys.com.
#ImportantNote: I am not the hiring person; I am just sharing this information to help job seekers. Please do not direct message me about this job opportunity.
📢 Note: If any company asks for money for a job, don't pay; it might be fake.
For more such content, follow Mahesh s. Commenting/re-posting for better reach to the people following my network. If you do not want to miss daily updates on genuine job posts, you can follow/connect with Mahesh s. Instead of commenting 'Interested', send a resume to the given email ID.
📢 Join our channel for more opportunities: [Telegram](https://t.me/ITjobsh)
#Hiring #TechnicalSpecialist #RemoteWork #India #Contract #BigData #Kafka #ApacheStorm #SparkStreaming #Hazelcast #NoSQL #MongoDB #Elasticsearch #Solr #CodeReview #Troubleshooting #DistributedSystems #CloudPlatforms #AWS #Azure #GCP #SeniorDeveloper #Java #Python #Scala #ProblemSolving #Analytics #Communication #Collaboration #ImmediateJoiners
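Purely illustrative (not part of the posting): a minimal sketch of the Hazelcast distributed-cache usage the responsibilities mention, using the hazelcast-python-client package; the cluster address, map name, and keys are placeholder assumptions.

```python
# Hypothetical sketch: a shared cache entry in a Hazelcast distributed map.
import hazelcast  # pip install hazelcast-python-client

# Connect to an assumed local cluster member.
client = hazelcast.HazelcastClient(cluster_members=["localhost:5701"])

# A distributed map is shared by every client connected to the cluster.
cache = client.get_map("session-cache").blocking()

cache.put("user:42", "Ada Lovelace")  # visible to all cluster clients
print(cache.get("user:42"))           # -> "Ada Lovelace"

client.shutdown()
```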
-
ZettaLogix is hiring!
Job Title: Big Data Engineer
Location: Plano, TX (Hybrid)
The Engineer III role plans, designs, develops, and tests high-quality, innovative, and fully performing software systems or applications for software enhancements and new products. Key responsibilities include:
• Contribute to the full software development life cycle
• Write maintainable, extensible, tested code while complying with coding standards
• Produce specifications and determine operational feasibility
• Continuously integrate and deliver software components into a fully functional software system
• Facilitate end-to-end user testing with customers
• Troubleshoot, debug, and upgrade existing systems
CAREER LEVEL SUMMARY
• Proficiency: Has fully mastered the immediate function/domain and has developed competent skills in complementary functions or domains. Able to train junior members in mastered domain knowledge.
• Direction: Is largely autonomous, working on a day-to-day basis without supervision or support. Occasionally checks in with manager for questions and direction. Provides support or direction to more junior members.
• Business Focus: Understands TCNA's business model, as well as the specific roadmap of the assigned product or function. Understands the interconnectedness of business systems, products, and/or technologies. Understands the needs of the customer and approaches work with a desire to exceed customer expectations. Shows basic understanding of technology costs and validates the impact of choices with their manager when unsure.
• Growth Mindset: Exhibits a strong growth mindset, approaches feedback and constructive criticism eagerly, and actively implements plans for change. Tolerant of organizational turbulence. Positively contributes to team and organizational culture. Raises concerns constructively.
QUALIFICATIONS
• 8+ years of experience
• Experience in building streaming and batch data pipelines using Big Data technologies (Spark, Flink, Kinesis, Kafka, etc.) on large-scale unstructured data sets (a minimal sketch follows this post)
• AWS experience developing applications for cloud platforms such as EC2, Beanstalk, EKS, and/or Lambda
• Kubernetes experience in developing, deploying, and orchestrating microservices
• Experience developing applications within Docker containers
• Experience with infrastructure-as-code tools such as Terraform or CloudFormation
• Expertise in multiple languages such as Java/Kotlin/Scala
• Has used one of the common Java frameworks (Spring, Spring Boot, Quarkus, or similar) and any of the Java Persistence API and JDBC implementations
• Understanding of concepts regarding security, privacy, performance, etc.
#bigdataengineer #Spark #Flink #kinesis #kafka #EC2 #Beanstalk #EKS #Lambda #Kubernetes #Docker #Terraform #Java #Kotlin #Scala #Spring #api #JDBC
You can reach me at shivani@zettalogix.com or 732-359-2830.
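Purely illustrative (not part of the posting): a minimal PySpark Structured Streaming sketch of the kind of Kafka-fed pipeline the qualifications describe. The broker address, topic, and S3 paths are placeholder assumptions, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
# Hypothetical sketch: stream events from Kafka and land them as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-parquet").getOrCreate()

# Kafka delivers raw bytes; cast the value column to string for downstream use.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "clickstream")                # assumed topic name
    .load()
    .select(col("value").cast("string").alias("payload"))
)

# Checkpointing lets the stream resume exactly where it left off after failure.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://my-bucket/clickstream/")            # assumed sink
    .option("checkpointLocation", "s3a://my-bucket/checkpoints/")
    .start()
)
query.awaitTermination()
```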
-
Discover our new #JobOpportunities in the #DataEngineering sector. 🔍 Our clients, with the support of Open Search Group, are looking to fill 9 different positions in the Data Engineering field. Learn more about the positions and skills required in the following carousel ⤵️ Are you interested in one or more positions? Write to hr@opensearchgroup.com, attaching your CV and typing "Spontaneous application Data Engineering" as the subject of the email 📧 #Python #Hadoop #Linux #Java #Springboot #MongoDB #ETL #AWS #Azure #Pandas #Git #Terraform #Jenkins #Spark #Kafka #GoogleCloud #Recruitment #OS_net #Headhunting #JobOpportunity
-
Discover our new #JobOpportunities in the #DataEngineering sector. 🔍 Our clients, with the support of Open Search Group, are looking to fill 15 different positions in the Data Engineering field. Learn more about the positions and skills required in the following carousel ⤵️ Are you interested in one or more positions? Write to hr@opensearchgroup.com, attaching your CV and typing "Spontaneous application Data Engineering" as the subject of the email 📧 #Python #Hadoop #Linux #Java #Springboot #MongoDB #ETL #AWS #Azure #Pandas #Git #Terraform #Jenkins #Spark #Kafka #GoogleCloud #Recruitment #OS_net #Headhunting #JobOpportunity
-
Modern Data Engineering Roadmap - 2024
Are you passionate about pursuing a data engineering career? Here is a well-detailed roadmap for becoming a data engineer in 2024.
Stage 1: Master data engineering fundamentals. Start by developing an in-depth understanding of what data engineering entails and establish a robust programming foundation. Learn SQL and any of the following programming languages: Python, Scala, C++, or Java.
Stage 2: Build hands-on experience in cloud computing. Gain practical experience with leading cloud platforms such as AWS, Azure, or GCP. Learn to provision resources, manage storage, and deploy applications in a cloud environment.
Stage 3: Explore distributed computing frameworks. Gain knowledge of Apache Hadoop, Apache Kafka, Apache Flink, and Apache Spark. Understand their architecture and how they enable the processing of large datasets across clusters.
Stage 4: Learn data warehouses and stream data processing. Develop your skills in batch and streaming data processing. Start using tools like Apache Hive or Amazon Redshift for efficient data analysis.
Stage 5: Dive into NoSQL databases, testing, and workflow orchestration tools. Explore NoSQL databases like MongoDB or Cassandra and learn best practices for testing and ensuring data integrity. Master workflow orchestration tools such as Apache Airflow and Prefect; understand how to design, schedule, and monitor complex data workflows (a minimal Airflow sketch follows this post).
For more career consulting, follow D Global Consultants.
#onlinecourses #dataengineering #dataengineerjobs #dataengineers #dataengineer
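To make Stage 5 concrete, here is a minimal, hypothetical Apache Airflow sketch (assuming Airflow 2.4+ for the `schedule` argument); the DAG id, task names, and the extract/load bodies are illustrative assumptions, not part of the original post.

```python
# Hypothetical sketch: a tiny daily ETL workflow in Apache Airflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull rows from a source system.
    print("extracting...")


def load():
    # Placeholder: write transformed rows to the warehouse.
    print("loading...")


with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # run once per day
    catchup=False,              # skip backfilling past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task   # extract must finish before load starts
```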