Remote Web Scraper (Python Data Engineer) Job at Taiyō

Web Scraper (Python Data Engineer)


Work from home

Start date: Starts immediately
CTC: 7 - 12 LPA
Apply by: 7 Feb '23
Posted 3 weeks ago
Fresher Job
539 applicants
About Taiyō
Taiyo is a Silicon Valley startup that aggregates, predicts, and visualizes the world's data so customers don't have to. We are a globally distributed team focused on the infrastructure vertical. Taiyo was founded by an interdisciplinary group of experts from Stanford University's AI Institute, the World Bank, the International Monetary Fund, and UC Berkeley.
Activity on Internshala
Hiring since May 2017
109 opportunities posted
65 candidates hired
About the job
Key responsibilities:

1. Work on data sourcing
2. Build and run web scrapers (BeautifulSoup, Selenium, etc.); see the sketch after this list
3. Manage data normalization and standards validation
4. Parametrize and automate the scrapers
5. Develop and execute processes for monitoring data sanity and checking data availability and reliability
6. Understand the business drivers and build insights through data
7. Work with the stakeholders at all levels to establish current and ongoing data support and reporting needs
8. Ensure continuous data accuracy and recognize data discrepancies in systems that require immediate attention/escalation
9. Work and become an expert in the company's data warehouse and other data storage tools, understanding the definition, context, and proper use of all attributes and metrics
10. Create dashboards based on business requirements
11. Work on distributed systems: scale, cloud, caching, CI/CD (continuous integration and deployment), distributed logging, data pipelines, and REST APIs
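The scraping and data-sanity items above (2-5) fit together in practice roughly as follows. A minimal sketch using requests and BeautifulSoup (the latter named in the posting); the URL, CSS selector, and row threshold are hypothetical placeholders, not part of this role's actual sources:

```python
# Minimal sketch of a parametrized scraper with a basic data-sanity check.
# The URL, CSS selector, and row threshold below are hypothetical.
import requests
from bs4 import BeautifulSoup


def scrape_table(url: str, row_selector: str, min_rows: int = 1) -> list:
    """Fetch a page, extract the rows matched by row_selector, and validate."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()  # surface availability problems immediately

    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    for tr in soup.select(row_selector):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if cells:
            rows.append(cells)

    # Data-sanity check: fail loudly if the page layout changed or the source
    # returned fewer records than expected, so the failure can be escalated.
    if len(rows) < min_rows:
        raise ValueError(f"expected >= {min_rows} rows, got {len(rows)}")
    return rows


if __name__ == "__main__":
    # Hypothetical target; swap in a real source, selector, and threshold.
    data = scrape_table("https://example.com/prices", "table#prices tr", min_rows=5)
    print(f"scraped {len(data)} rows")
```

Parametrizing the URL and selector keeps one function reusable across sources, and the explicit row-count check turns silent layout changes into loud failures a pipeline can escalate.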

Who can apply:

1. Creativity & complex problem-solving skills
2. Exceptional and scalable web scraping skills
3. Passion and interest in doing ETL jobs
4. Good English speaking and communication skills
5. Ability to work with a global remote culture
6. Initiative and entrepreneurship skills
7. Experience with microservices architecture and writing REST APIs
8. Knowledge of Kubernetes, Docker, and Airflow (see the DAG sketch after this list)
9. Prior experience with Python, Django, and Gunicorn
10. Independent work ethic with an ability to work in a fast-paced environment
11. Strong Python scripting practices and scalable web scraping skills, including monitoring data ingestion, adhering to data standards, and solid knowledge of data and cloud workflow orchestration
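Airflow (requirement 8) is the usual choice for scheduling scrapers like the one sketched above. A minimal sketch of a daily DAG, assuming Airflow 2.4 or newer; the dag_id, schedule, and scrape_source callable are illustrative, not from the posting:

```python
# Minimal sketch of a daily scraping DAG, assuming Airflow 2.4+.
# The dag_id, schedule, and scrape_source callable are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def scrape_source():
    # Placeholder: invoke a parametrized scraper like scrape_table() here.
    print("scraping...")


with DAG(
    dag_id="daily_scrape",
    start_date=datetime(2023, 1, 1),
    schedule="@daily",  # run once per day
    catchup=False,      # skip backfilling past runs
) as dag:
    PythonOperator(task_id="scrape", python_callable=scrape_source)
```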
Skill(s) required
Amazon Web Services (AWS), Data Science, Data Structures, Docker, Python, REST API
Probation: 5 months at ₹15,000 - 50,000 per month

After probation:

Annual CTC: 7 - 12 LPA (all fixed)

5 days a week
Number of openings
