Getting AI to work with you

Those who have worked with me will recognise that I sometimes call myself the "Chief Idiot". I don't use this term to denigrate myself. Rather, I use it to describe a strength(?) that I (think I) have: being able to simplify things.

The ability to simplify a concept or a challenge makes it far easier to identify the most expedient solution.

Simplifying the definition of a problem can be very useful in eliciting the right responses from others, helping all parties reach a desired solution quickly and effectively.

I decided to try this skill out with #AI - specifically with #Bing Chat - while working on a project I had just started in the area of search. Below is an excerpt of my conversation with Bing Chat. If you look carefully, you will see evidence of the following:

  • Chatting with AI delivers value because AI has no vested interests in the outcome of the chat.
  • Chatting with AI can also be a useful way to clarify what you understand and refine the solution options that are available to you.

I believe that one of the skills we will all need to develop (or get better at) is the skill of communicating properly with AI to help us get better results from our work.

I foresee a future where we will be booking meeting slots with AI to discuss problems and solutions.

Do let me know what you think of the conversation below...

(Note: the questions are from me; the responses are from Bing Chat).

From your answer, is it accurate to conclude that Python is one of the best tools to use in building code that does web scraping?


Python is a popular language for web scraping because it has many libraries that make it easy to extract data from websites. However, there are other languages and tools that can also be used for web scraping. It ultimately depends on the specific needs and preferences of the person doing the scraping.


Apart from Python, what other languages are popular for web scraping?

Apart from Python, other popular languages for web scraping include Ruby, Node.js, Golang, Perl, PHP, C#, C & C++, Java and Rust. Each language has its own strengths and weaknesses when it comes to web scraping. The best language for a developer to build a web scraper is the one that they are most familiar with.


If I understand your responses so far, is it accurate to state that web scraping involves a GET request to the website that stores the desired information? Thereafter, the web scraping tool should be able to structure the data pulled from the source website and present it in a manner that is usable by the web scraping user?


Yes, that is correct. Web scraping generally involves sending a GET request to the server to retrieve the desired information. The web scraping tool then structures the data pulled from the source website and presents it in a usable format for the user.


What is the difference between a GET request and the HTTP request that is sent when a URL is typed into a browser?

A GET request is a type of HTTP request that is used to retrieve data from a server. When you type a URL into a web browser and hit enter, the browser sends a GET request to the server to retrieve the content of the web page.

In other words, when you type an HTTP request in a browser, it is usually a GET request that is being sent to the server to retrieve the desired information. However, there are other types of HTTP requests, such as POST, PUT, and DELETE, that can be used for different purposes.
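To make this concrete, here is a small Python sketch (using only the standard library; the example.com URL is a placeholder) showing that a request built from a bare URL defaults to GET, which is the same thing a browser sends when you press enter:

```python
from urllib.request import Request

# Build a request object for a URL, exactly as typed into a browser's
# address bar. No method or body is specified.
req = Request('https://www.example.com/search?q=example')

# urllib (like a browser) defaults to GET when no data is attached.
print(req.get_method())  # GET
```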

Please correct this statement as you deem appropriate: A web scraping tool makes a GET request to the website that it is trying to pull the information from. Thereafter it reads the content of the HTML response received. Then it uses in-built rules to decipher what pieces of data it wants to pull from the READ action. Based on how the web scraping tool is coded, it can store the data pulled and/or send it to another piece of code.

Your statement is mostly correct. A web scraping tool sends a GET request to the website it is trying to extract information from. It then reads the content of the HTML response received and uses its built-in rules to determine which pieces of data to extract. The extracted data can then be stored or sent to another piece of code for further processing, depending on how the web scraping tool is coded.
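As an illustration of the "built-in rules" step, here is a minimal Python sketch that parses an HTML response with the standard library's html.parser and pulls out the text of every h2 tag. The sample HTML and the choice of tag are my own assumptions, not part of the conversation:

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text inside every <h2> element."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == 'h2':
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == 'h2':
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())

# Stand-in for the HTML content returned by a GET request
html = '<html><body><h2>First result</h2><p>snippet</p><h2>Second result</h2></body></html>'

parser = TitleExtractor()
parser.feed(html)
print(parser.titles)  # ['First result', 'Second result']
```

In a real scraper, the "rules" would target whatever tags and attributes hold the data of interest on the source site.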

Can you share a piece of Python code that can be used to make a GET request to a website such that it performs the same action as a search on the source website based on a keyword that is supplied to the web scraping tool?

Here is an example of a Python script that uses the requests library to send a GET request to a website and perform a search based on a supplied keyword:

import requests

# Set the URL of the website to search
url = 'https://www.example.com/search'

# Set the search keyword
keyword = 'example'

# Set the query parameters for the GET request
params = {'q': keyword}

# Send the GET request to the website
response = requests.get(url, params=params)

# Check if the request was successful
if response.status_code == 200:
    # Get the content of the response
    content = response.text
    # Do something with the content
    # ...
else:
    print(f'An error occurred: {response.status_code}')


This script sends a GET request to the specified URL with the query parameter q set to the value of the keyword variable. The content of the response can then be processed further to extract the desired information.
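For clarity on what happens to the params dict: requests URL-encodes it and appends it to the URL as a query string. Here is a standard-library sketch of that same encoding (the URL is the same placeholder used above):

```python
from urllib.parse import urlencode

url = 'https://www.example.com/search'
params = {'q': 'web scraping'}

# requests.get(url, params=params) would request this final URL:
full_url = f'{url}?{urlencode(params)}'
print(full_url)  # https://www.example.com/search?q=web+scraping
```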


Correct this statement as you deem appropriate: In order to be able to obtain the key word that would be supplied to the code above, it is possible to create another piece of python code that asks a user to type the desired key word.

Your statement is correct. You can create another piece of Python code that prompts the user to enter a keyword, which can then be passed to the web scraping script as the search keyword. Here is an example:

# Prompt the user to enter a keyword
keyword = input('Enter a keyword to search: ')

# Pass the keyword to the web scraping script
# ...


This code uses the input function to prompt the user to enter a keyword. The entered keyword is then stored in the keyword variable and can be passed to the web scraping script as the search keyword.

Correct this statement as you deem appropriate: for the two pieces of code to work properly, the second piece of code would need to be saved as a .py file. Thereafter, the first piece of code will need to be updated to read the second piece of code for the keyword input, before it can perform the GET request on the website where the desired information is stored.

Your statement is partially correct. Both pieces of code can be saved in the same .py file or in separate .py files. If they are saved in separate files, the file containing the web scraping script would need to be updated to import the keyword from the file containing the user input script.
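As a sketch of the separate-files option (the module name keyword_input is my own choice, not something from the conversation), the user-input script could expose a function that the scraping script imports:

```python
# keyword_input.py  (hypothetical file name)
def get_keyword():
    """Prompt the user to enter a search keyword."""
    return input('Enter a keyword to search: ')

# The web scraping script would then begin with:
#   from keyword_input import get_keyword
#   keyword = get_keyword()
```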

Here is an example of how the two scripts can be combined into a single .py file:

import requests

# Prompt the user to enter a keyword
keyword = input('Enter a keyword to search: ')

# Set the URL of the website to search
url = 'https://www.example.com/search'

# Set the query parameters for the GET request
params = {'q': keyword}

# Send the GET request to the website
response = requests.get(url, params=params)

# Check if the request was successful
if response.status_code == 200:
    # Get the content of the response
    content = response.text
    # Do something with the content
    # ...
else:
    print(f'An error occurred: {response.status_code}')


This script combines both pieces of code into a single script that prompts the user to enter a keyword and then uses that keyword to perform a search on the specified website.
