Sample Resume: Big Data Engineer

Raj Patel
000.555.1212
Providence, Rhode Island
www.portfoliolink
rpatel@email.com


Big Data Engineer

Designer, builder and manager of Big Data infrastructures

A collaborative engineering professional with substantial experience designing and executing solutions for complex business problems involving large-scale data warehousing, real-time analytics and reporting solutions. Known for using the right tools when and where they make sense and for creating an intuitive architecture that helps organizations effectively analyze and process terabytes of structured and unstructured data.


Competency Synopsis 

Data Warehousing
Proven history of building large-scale data processing systems and serving as an expert in data warehousing solutions while working with a variety of database technologies. Experience architecting highly scalable, distributed systems using a range of open source tools, as well as designing and optimizing large, multi-terabyte data warehouses. Able to integrate state-of-the-art Big Data technologies into the overall architecture and lead a team of developers through the construction, testing and implementation phases.

Databases and Tools: Relational: MySQL, MS SQL Server, Oracle, DB2, SAP HANA, Teradata, Vertica, Greenplum; NoSQL: HBase, Cassandra, MongoDB, CouchDB; Other: HDFS, Pentaho.

Data Analysis
Consulted with business partners and made recommendations to improve the effectiveness of Big Data, descriptive analytics and prescriptive analytics systems. Integrated new tools and developed technology frameworks/prototypes to accelerate the data integration process and enable the deployment of predictive analytics. Working knowledge of machine learning and predictive modeling.

Tools: Hive, Pig, Hadoop Streaming, MapReduce, R, SPSS, SAS, Weka, MATLAB.

Data Transformation
Experience designing, reviewing, implementing and optimizing data transformation processes in the Hadoop and Informatica ecosystems. Able to consolidate, validate and cleanse data from a vast range of sources – from applications and databases to files and Web services.

Tools: Ascential (IBM) DataStage, Informatica PowerCenter RT, Pentaho Kettle, SSIS and shell scripting; Linux/UNIX commands.

Data Collection
Capable of extracting data from an existing database, Web sources or APIs. Experience designing and implementing fast and efficient data acquisition using Big Data processing techniques and tools.

Tools: APIs and SDKs, RESTful Interfaces 

Professional Experience 

RI Cablevision     2012 to present
Data Architect/Big Data Engineer

Attained 20% growth in revenue and customers over the last two years by analyzing business needs, collaborating with stakeholders and designing a new data warehouse. Then successfully led several data extraction, warehousing and analytics initiatives that reduced operating costs and created customized programming options.

Highlights:

  • Designed a large data warehouse using star and snowflake schemas. View my tips on SlideShare (insert link)
  • Designed and developed Big Data analytics platform for processing customer viewing preferences and social media comments using Java, Hadoop, Hive and Pig.
  • Integrated Hadoop into traditional ETL, accelerating the extraction, transformation, and loading of massive structured and unstructured data. Read my ETL whitepaper (insert link)
  • Loaded the aggregate data into a relational database for reporting, dashboarding and ad hoc analyses, which revealed ways to lower operating costs and offset the rising cost of programming.
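The viewing-preference analytics described above can be illustrated with a minimal sketch (the field names and data are hypothetical; the production pipeline used Hive and Pig rather than pure Python):

```python
from collections import Counter

def top_genres(viewing_events, n=3):
    """Aggregate viewing events into the n most-watched genres.

    viewing_events: iterable of dicts with a 'genre' key
    (hypothetical schema, for illustration only).
    """
    counts = Counter(event["genre"] for event in viewing_events)
    return [genre for genre, _ in counts.most_common(n)]

# Hypothetical batch of customer viewing events
events = [
    {"customer_id": 1, "genre": "sports"},
    {"customer_id": 2, "genre": "drama"},
    {"customer_id": 1, "genre": "sports"},
    {"customer_id": 3, "genre": "news"},
    {"customer_id": 2, "genre": "sports"},
]
print(top_genres(events, n=1))  # → ['sports']
```

In the Hive/Pig pipeline, the equivalent step would be a GROUP BY aggregation over the event table rather than an in-memory Counter.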

Election Data Services     2010 to 2012
Data Architect/Big Data Engineer     
Helped this pollster gain credibility and garner a reputation for reliable predictions by creating a Big Data framework and deploying tools to capture, transform, analyze and store terabytes of structured and unstructured data.

Highlights:

  • Installed and configured Apache Hadoop, Hive and Pig environment on the prototype server.
  • Configured SQL database to store Hive metadata.
  • Loaded unstructured data into the Hadoop Distributed File System (HDFS).
  • Created ETL jobs to load Twitter JSON data and server data into MongoDB, then transferred the MongoDB data into the data warehouse.
  • Created reports and dashboards using structured and unstructured data.
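The Twitter ETL step above can be sketched as a flattening function (a minimal sketch: the field names follow Twitter's classic JSON payload, the document shape is illustrative, and the actual MongoDB insert is omitted):

```python
import json

def tweet_to_doc(raw_json):
    """Flatten a raw tweet JSON string into a document suitable
    for insertion into a MongoDB collection (illustrative subset
    of fields; real jobs would handle many more and validate them)."""
    tweet = json.loads(raw_json)
    return {
        "tweet_id": tweet["id_str"],
        "user": tweet["user"]["screen_name"],
        "text": tweet["text"],
        "created_at": tweet["created_at"],
    }

# Hypothetical raw payload as it might arrive from the streaming API
raw = json.dumps({
    "id_str": "42",
    "text": "Election night!",
    "created_at": "Tue Nov 06 20:00:00 +0000 2012",
    "user": {"screen_name": "pollwatcher"},
})
print(tweet_to_doc(raw)["user"])  # → pollwatcher
```

A real job would wrap this in batching and error handling before handing documents to the MongoDB driver.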

Cutting Edge Data Management Solutions     2006 to 2010
Data Engineer
Database and Enterprise Architect
Responsible for the architecture and design of the data storage tier for this third-party provider of data warehousing, data mining and analysis services.

Bank of New England, Data Warehouse Developer     2004 to 2006
Built business intelligence infrastructure to support the acquisition and maintenance of private banking customers, including the storage and retrieval of market intelligence and the execution of predictive and prescriptive analytics.

Additional Experience in Computer Programming and Database Administration     2000 to 2004

Education and Training 

Brown University, Bachelor’s Degree in Computer Science and Mathematics
Big Data University, Hadoop and SQL Training
Kimball Group, ETL Architecture Training Courses
TDWI, Courses in Data Warehousing and Architecture

For a complete list of training courses, work experience and tools visit (insert link to personal website here)