Big Data Engineer
Company: Ascension, WI
Posted on: October 10, 2018
Big Data Engineer
Job ID: 272813
Location: Austin, Texas (TXAUS 7715 Chevy Chase Bldg)
City, State: St. Louis, MO; Austin, TX; Nashville, TN; Indianapolis
Department: ESD Data Insights
Schedule: Regular / Day (FT, Days)

About Us
Ascension Technologies is one of the nation's largest healthcare information technology services organizations. We provide Ascension and its subsidiaries low-cost, high-value IT infrastructure and software application services that:
- Support rapid and effective clinical decision making
- Improve efficiency and care transitions
- Foster information sharing across the continuum of care
- Make knowledge and data actionable, leading to improved patient outcomes

Job Description
Job Summary:
The Data Architect ensures that the data assets of an organization are supported by its information technology architecture, and is responsible for the construction and development of large-scale Hadoop data processing systems. The Big Data Engineer must have considerable expertise in data warehousing and NoSQL technologies, along with coding expertise in Python, Scala, and Hive SQL. The role implements enterprise big data architecture designs and works closely with the rest of the Analytics team, IT, and internal business partners to identify, evaluate, design, and implement big data solutions across structured and unstructured, public and proprietary data. The Big Data Engineer works iteratively on the distributed Hadoop platform to deliver data solutions to stakeholders.

Responsibilities:
- Recommends strategies based on changing business needs for the supported area.
- Advises management on approaches to optimize business success.
- Evaluates the applicability of leading-edge technologies and uses that information to significantly influence future business strategy.
- Analyzes complex business and competitive issues and discerns the implications for systems support.
- Identifies, defines, directs, and performs project issue analysis to resolve issues, including analysis of the technical and economic feasibility of proposed data solutions.
- Designs projects with broad implications for the business and/or the future architecture, successfully addressing cross-technology and cross-platform issues.
- Selects tools and methodologies for projects.
- Negotiates terms and conditions with vendors.
- Develops partnerships with senior users to understand business needs and define future data requirements.
- Effectively communicates highly technical information to numerous audiences, including senior management, the user community, and less-experienced staff.
- Leads the development of effective networks of internal and external customers, suppliers, the technical community, and consultants.
- Leads the organization's planning for the future of the data model and data architecture.
- Leads the continual redesign of the organization's data model and/or the transition to a common model.
- Proposes and leads projects required to support the development of the organization's data infrastructure and provides intelligence on advances in database technologies.

Desired Responsibilities:
- Experience as a Hadoop developer with sound knowledge of Hadoop ecosystem technologies.
- Hands-on experience developing and deploying enterprise applications using major Hadoop ecosystem components such as Hadoop, YARN, Hive, MapReduce, HBase, Flume, Sqoop, Spark, Kafka, Oozie, and ZooKeeper.
- Excellent programming skills at a high level of abstraction using Scala and Spark.
- Hands-on experience importing and exporting data between databases such as SQL, Oracle, and Teradata and HDFS using Sqoop.
- Strong experience with real-time streaming applications and batch-style, large-scale distributed computing applications using tools such as Spark, Kafka, Flume, MapReduce, and Hive.
- Designs Extract, Transform, Load (ETL) processes that populate Hive databases.
- Population often includes data from multiple sources; ETL design frequently requires building Hive tables, performing data conversion, creating calculated fields, designing data update routines, and building scheduled jobs.
- Manages and schedules batch jobs on a Hadoop cluster using Oozie.
- Experience managing and reviewing Hadoop log files.
- Experienced using Sqoop to import data into HDFS from an RDBMS and vice versa.
- Hands-on experience in the analysis, design, coding, and testing phases of the Software Development Life Cycle (SDLC).
- Ability to work with different file formats such as Avro, Parquet, and JSON.
- Ability to advise management on approaches to optimize big data platform success.
- Understands database schemas and data flow diagrams.
- Demonstrated success documenting technical specifications.
- Identifies, analyzes, and troubleshoots problems in big data solutions.
- Understands source systems and the common business and technical keys.
- Reviews and evaluates data output to ensure it conforms to technical requirements.
- Collaborates in the planning, design, deployment, and testing of new or enhanced solutions.
- Creates detailed test plans for new development and maintains regression testing for existing functionality.
- Focuses on building infrastructure and architecture for big data generation.
- Effectively communicates highly technical information to numerous audiences, including management, the user community, and less-experienced staff.
- Collaborates with Product Managers, Operations, and Data Architecture to produce the best possible end products.
- Consistently communicates the status of project deliverables.
- Consistently provides work-effort estimates to management to assist in setting priorities.
- Delivers timely work in accordance with estimates.
- Solves problems as they arise and communicates potential roadblocks to manage expectations.
- Adheres strictly to all security policies.

Qualifications
Education: Bachelor's degree preferred, or equivalent experience.
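To illustrate the ETL pattern named in the Desired Responsibilities (building tables, converting data, creating calculated fields, loading the result), here is a minimal sketch in plain Python. It uses sqlite3 purely as a stand-in target; on the actual platform this would be Hive tables populated via HiveQL or Spark, and all names here (patient_charges, net_charge, the sample records) are hypothetical.

```python
import json
import sqlite3

# Extract: parse records from a JSON source (one of the formats listed above).
raw = json.loads(
    '[{"patient_id": 1, "charge": 120.0, "discount": 0.25},'
    ' {"patient_id": 2, "charge": 80.0, "discount": 0.0}]'
)

# Transform: perform data conversion and add a calculated field (net_charge).
rows = [(r["patient_id"], r["charge"], r["charge"] * (1 - r["discount"]))
        for r in raw]

# Load: build the target table and populate it (a Hive table in production;
# sqlite3 in memory here so the sketch is self-contained).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE patient_charges (patient_id INT, charge REAL, net_charge REAL)"
)
conn.executemany("INSERT INTO patient_charges VALUES (?, ?, ?)", rows)

net = {pid: n for pid, n in
       conn.execute("SELECT patient_id, net_charge FROM patient_charges")}
print(net)  # {1: 90.0, 2: 80.0}
```

In production the same three steps would typically be expressed as a scheduled Oozie job running Spark or Hive queries rather than in-process Python.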
Desired Education: Master's-level engineering degree preferred.
Work Experience: Seven years of experience preferred.
Desired Work Experience: Four to seven years of experience preferred.
Minimum years of relevant experience: 2 years. Some of the minimum experience requirement may be met with a Master's or other advanced degree. Experience with healthcare data is desirable. Coding experience with Python, Scala, and/or PySpark is required. Experience with big data technologies such as HDFS, Spark, Impala, Hive, shell scripting, and Bash is required.

Equal Employment Opportunity
Ascension Technologies is an EEO/AA Employer M/F/Disability/Vet. Please click the link below for more information.
EEO is the Law Poster Supplement

E-Verify Statement
Ascension Technologies participates in the Electronic Employment Verification Program. Please click the E-Verify link below for more information.