Automated Twitter Data Scraping and Storage to Google Sheets
This n8n workflow automates Twitter data extraction, allowing you to scrape tweets by username or search query using Apify's tweet-scraper and store the results directly into Google Sheets. It efficiently captures tweet details and engagement metrics.
About This Workflow
Overview
This n8n workflow demonstrates how to automate the scraping of Twitter data and its subsequent storage into Google Sheets. It leverages the power of Apify's tweet-scraper actor to fetch tweets based on either a specific username or a search query. The workflow is designed to be flexible, allowing users to define the input parameters for scraping and then efficiently append the extracted tweet data, including engagement metrics, to designated sheets in Google Sheets.
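For reference, the HTTP Request nodes in this pattern typically POST to Apify's run-sync-get-dataset-items endpoint (https://api.apify.com/v2/acts/&lt;actorId&gt;/run-sync-get-dataset-items?token=&lt;APIFY_TOKEN&gt;), which runs the actor and returns the scraped items in the same response. A minimal sketch of the username-mode request body follows; the twitterHandles and maxItems property names are assumptions and should be checked against the tweet-scraper actor's input schema in the Apify console.

```json
{
  "twitterHandles": ["<TargetUsername>"],
  "maxItems": 100
}
```

In the scrape_username_tweets node, the 'Username' and 'Limit' values collected by the form trigger would feed these two properties, with the Apify token kept in an n8n credential rather than hard-coded in the URL.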
Key Features
- Scrapes tweets by a specified username.
- Scrapes tweets based on a given search query.
- Utilizes Apify's tweet-scraper actor for robust data extraction.
- Filters scraped tweets for verified users and Twitter Blue accounts (an example actor input showing these options appears after this list).
- Sorts search results by "Top" tweets (most engaging).
- Appends scraped tweet data to Google Sheets, including ID, URL, text, timestamp, and engagement metrics (retweets, replies, likes, views).
- Supports dynamic input for username and tweet limits via n8n's form trigger.
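To make the filter and sort features above concrete, the search-mode actor input might look roughly like the example below. The searchTerms and maxItems names come from the node configuration described later; sort, onlyVerifiedUsers, and onlyTwitterBlue are assumed property names to confirm against the tweet-scraper actor's documentation before use.

```json
{
  "searchTerms": ["n8n automation"],
  "maxItems": 50,
  "sort": "Top",
  "onlyVerifiedUsers": true,
  "onlyTwitterBlue": true
}
```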
How To Use
- Set up Credentials: Ensure you have configured API credentials for Apify and Google Sheets within n8n.
- Configure Form Trigger: Set up the tweets_by_username form trigger to define the 'Username' and 'Limit' for scraping.
- Configure HTTP Request (Username): In the scrape_username_tweets node, ensure the 'Username' and 'Limit' are correctly mapped from the form trigger's output (a request sketch appears under the Overview above).
- Configure Google Sheets (Username): In the append_username_tweets node, verify the 'Document ID' and 'Sheet Name' where the scraped data will be stored, and map the output fields from the HTTP request to the corresponding Google Sheets columns (see the column-mapping sketch after this list).
- Configure HTTP Request (Search Query): For scraping by search query, configure the scrape_search_tweets node with your desired 'searchTerms', 'maxItems', and other relevant parameters.
- Configure Google Sheets (Search Query): In the corresponding Google Sheets node for search queries, set the 'Document ID' and 'Sheet Name' and map the relevant fields.
- Activate Workflow: Save and activate the workflow to start collecting Twitter data.
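For the column mapping mentioned in the Google Sheets (Username) step, each scraped item is appended as one row whose columns mirror the fields listed under Key Features. A minimal sketch of that mapping, written as it would appear in the workflow JSON using n8n expressions, is shown below; the item property names (id, url, text, createdAt, retweetCount, replyCount, likeCount, viewCount) are assumptions about the actor's output, so verify them against an actual item in the node's input panel.

```json
{
  "ID": "={{ $json.id }}",
  "URL": "={{ $json.url }}",
  "Text": "={{ $json.text }}",
  "Timestamp": "={{ $json.createdAt }}",
  "Retweets": "={{ $json.retweetCount }}",
  "Replies": "={{ $json.replyCount }}",
  "Likes": "={{ $json.likeCount }}",
  "Views": "={{ $json.viewCount }}"
}
```

The left-hand keys are the Google Sheets column headers; if the sheet's header row matches these names, the Append operation writes one row per tweet.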
Apps Used
Apify (tweet-scraper actor), Google Sheets
Workflow JSON
```json
{
  "id": "7e265a17-20df-4aec-857e-3cc172be5c00",
  "name": "Automated Twitter Data Scraping and Storage to Google Sheets",
  "nodes": 0,
  "category": "Data Scraping",
  "status": "active",
  "version": "1.0.0"
}
```
Note: This is a sample preview. The full workflow JSON contains node configurations, credentials placeholders, and execution logic.
About the Author
Crypto_Watcher
Web3 Developer
Automated trading bots and blockchain monitoring workflows.