Automating Fine-tuning Dataset Creation using Multimodal Generative AI Models
Licensed image from Shutterstock.com


Leveraging the latest multimodal models from Anthropic to automate the tedious and often error-prone process of creating labeled datasets for training and fine-tuning Generative AI models.

Introduction

Recently, I co-developed and presented a talk on fine-tuning generative AI foundation models, Build a Personalized Avatar with Amazon Titan Image Generator, with my peer, Deepti Tirumala, at the 2024 AWS New York Summit. Attendance at our talk and the community's interest afterward were significant.

2024 AWS New York Summit talk
Sample output from a fine-tuned Amazon Titan Image Generator model

In researching the topic for the talk, we fine-tuned dozens of copies of the Amazon Titan Image Generator foundation model and similar text-to-image models from Stability AI, including Stable Diffusion XL (SDXL) and Stable Diffusion 3 Medium (SD3). Fine-tuning was accomplished on several AI platforms, including Amazon Bedrock, Amazon SageMaker, Civitai, and locally with ComfyUI and Stable Diffusion web UI (A1111). We experimented with different dataset sizes, image preparation techniques, captioning strategies, and hyperparameters based on the model, fine-tuning technique, and platform.

Automating Dataset Preparation

Regardless of model type, preparing the labeled dataset for fine-tuning is the first and often the most labor-intensive step. Preparing even a small dataset of 20–100 images and captions for PEFT (Parameter-Efficient Fine-Tuning) with LoRA (Low-Rank Adaptation) can be tedious and prone to error and human bias. Creating consistent and effective captions or tags, assigning them to the right images, and writing them all in formats such as comma-delimited text, JSON, and JSONL (JSON Lines) leaves room for error. The process is similar for training and fine-tuning text-to-text, embedding, and other model types.
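To make those formats concrete, here is a minimal sketch (the file names and captions are hypothetical) that writes the same two image/caption pairs as comma-delimited text, as a single JSON array, and as JSONL; each platform typically expects exactly one of these shapes, which is where manual preparation tends to introduce errors.

import json

# Hypothetical image/caption pairs; a real dataset typically holds 20-100 of these.
pairs = [
    {"image-ref": "images/subject_01.jpg", "caption": "subject smiling warmly"},
    {"image-ref": "images/subject_02.jpg", "caption": "subject frowning slightly"},
]

# Comma-delimited text: one "<image>,<caption>" line per pair.
with open("captions.csv", "w") as f:
    for pair in pairs:
        f.write(f'{pair["image-ref"]},{pair["caption"]}\n')

# JSON: a single array containing every pair.
with open("captions.json", "w") as f:
    json.dump(pairs, f, indent=2)

# JSONL (JSON Lines): one self-contained JSON object per line.
with open("captions.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")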

One of the techniques that helped accelerate the fine-tuning research and experimentation for our talk was leveraging the latest generation of multimodal models to assist with constructing the fine-tuning data. After considerable prompt engineering and optimization, we found the newly released Anthropic Claude 3.5 Sonnet multimodal model performed very well at this task. According to Anthropic, “Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations, with the speed and cost of our mid-tier model, Claude 3 Sonnet.”

Chart courtesy of Anthropic

Amazon Bedrock Fine-tuning Data Generation

Industry-leading current-generation foundation models, like those from Anthropic, demonstrate advanced comprehension of complex prompt-based instruction sets. These models can generate accurate image captions and descriptive tags, as in our fine-tuning example, and they can also precisely format the output, minimizing or eliminating the need to post-process the data to prepare it for training.

Workflow used to fine-tune models for the talk

As part of the fine-tuning examples featured in our talk, we used small datasets of 15–25 images of different subjects from Shutterstock — human models, 3D objects, architectural subjects, vehicles, and so forth. Shutterstock has “a library of over 450 million images, with 200,000 more added every day.” Using Claude 3.5 Sonnet, we generated fine-tuning data files that required little to no post-processing and were ready to use for fine-tuning.

In our first example, demonstrated in the talk, we used individual headshots from a facial expression grid of a female model licensed from Shutterstock to fine-tune the Amazon Titan Image Generator foundation model. These collages of similar images are perfect for getting started with fine-tuning.

Licensed image example from Shutterstock used for fine-tuning

Fine-tuning data for the Amazon Titan Image Generator model on Amazon Bedrock needs to be in the following JSONL format:

{"image-ref": "s3://bucket/path/to/image001.png", "caption": "<prompt text>"}
{"image-ref": "s3://bucket/path/to/image002.png", "caption": "<prompt text>"}
{"image-ref": "s3://bucket/path/to/image003.png", "caption": "<prompt text>"}        

Along with the images, we used the following prompt to guide Claude 3.5 Sonnet’s output:

You are an expert at writing succinct, factual, non-biased descriptions of images.
Describe the facial expressions and emotions shown in each image of the young woman.
Do not attempt to identify or name specific individuals in images.
The images in the grid are ordered left to right, top to bottom.
Return a "caption" corresponding to each image.
Each "caption" must start with the phrase "SusanJones the woman". For example, "SusanJones the woman smiling warmly with her teeth showing"
Output JSONL format in the following format: {"image-ref": "s3://sagemaker-us-east-1-123456789012/titan_image_finetune_dataset_01/images/susanjones_01.jpg", "caption": "SusanJones the woman smiling warmly with her teeth showing"}
Increment the numeric suffix of each "image-ref". For example, susanjones_01.jpg, susanjones_02.jpg, susanjones_03.jpg
Only output the JSONL data; do not include any additional output, preamble, explanation, or chain of thought.

Although Claude 3.5 Sonnet is now available through Amazon Bedrock, as an AWS employee, my access to some larger and newer foundation models and features on Amazon Bedrock is restricted. Thus, for this post, I am using Claude’s online chat interface from Anthropic. Claude’s chat interface, workbench, and API console are convenient and easy to use, especially for quick experimentation. Subscribing to Anthropic’s Professional plan provides additional advantages, like early access to new features, higher usage limits, and priority bandwidth and availability.
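For readers who prefer to script the same request rather than use the chat UI, below is a minimal sketch using Anthropic’s Python SDK and its Messages API. It assumes an ANTHROPIC_API_KEY environment variable, a local copy of the facial grid saved as expression_grid.jpg, and the captioning prompt above stored in a prompt.txt file; those file names are placeholders.

import base64

import anthropic

# Read the captioning prompt and base64-encode the facial grid image.
with open("prompt.txt") as f:
    prompt = f.read()

with open("expression_grid.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

client = anthropic.Anthropic()  # uses the ANTHROPIC_API_KEY environment variable

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_b64,
                    },
                },
                {"type": "text", "text": prompt},
            ],
        }
    ],
)

# The JSONL-formatted captions are returned as plain text.
print(message.content[0].text)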


Facial grid of fine-tuning image set and the text prompt in Anthropic Claude’s Chat UI

The JSONL-formatted response from Claude 3.5 Sonnet is ready to be used for fine-tuning the Amazon Titan Image Generator foundation model on Amazon Bedrock.

{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../susanjones_01.jpg", "caption": "SusanJones the woman smiling warmly with teeth showing"}
{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../susanjones_02.jpg", "caption": "SusanJones the woman with mouth wide open in excitement or surprise"}
{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../susanjones_03.jpg", "caption": "SusanJones the woman grimacing with teeth clenched"}
{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../susanjones_04.jpg", "caption": "SusanJones the woman frowning with a serious expression"}
{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../susanjones_05.jpg", "caption": "SusanJones the woman with furrowed brows looking confused or skeptical"}        

The resulting fine-tuned Amazon Titan Image Generator model effectively generated images in Amazon Bedrock, which closely approximated the fine-tuning dataset.

Fine-tuned Amazon Titan Image Generator foundation model in Amazon Bedrock
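As a rough sketch of what invoking the fine-tuned model programmatically might look like: custom models on Amazon Bedrock are invoked through a Provisioned Throughput ARN rather than a base model ID, so the ARN below is a placeholder, and the request body follows the Titan Image Generator text-to-image schema.

import base64
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder: custom models are invoked via a Provisioned Throughput ARN.
provisioned_model_arn = "arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abcd1234"

body = json.dumps(
    {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": "SusanJones the woman smiling warmly outdoors"},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "height": 1024,
            "width": 1024,
            "cfgScale": 8.0,
        },
    }
)

response = bedrock_runtime.invoke_model(modelId=provisioned_model_arn, body=body)
result = json.loads(response["body"].read())

# The model returns base64-encoded PNG images.
with open("susanjones_sample.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))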

For our talk’s second example, we used a different licensed Shutterstock image. This time, we passed individual images, as opposed to a single facial grid, to Claude 3.5 Sonnet for captioning. According to Anthropic’s documentation, “You can include multiple images in a single request (up to 5 for claude.ai and 20 for API requests). Claude will analyze all provided images when formulating its response. This can be helpful for comparing or contrasting images.”

Individual image for fine-tuning and the text prompt in Anthropic Claude’s Chat UI

Again, the properly JSONL-formatted response from Claude 3.5 Sonnet is ready for fine-tuning.

{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../johndoe_01.jpg", "caption": "JohnDoe the man looking straight ahead with a serious, focused expression"}
{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../johndoe_02.jpg", "caption": "JohnDoe the man smiling broadly and giving a thumbs up gesture"}
{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../johndoe_03.jpg", "caption": "JohnDoe the man frowning with a confused and skeptical expression while pointing"}
{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../johndoe_04.jpg", "caption": "JohnDoe the man scowling with an angry expression and clenched fists"}
{"image-ref": "s3://sagemaker-us-east-1-123456789012/.../johndoe_05.jpg", "caption": "JohnDoe the man smiling warmly with a friendly and approachable expression"}        

Generating Alternate Dataset Formats

Each model, training technique, and platform may require different formats for training datasets. For example, to fine-tune Stable Diffusion models, such as SDXL 1.0, on OpenArt or Civitai, you would likely use LoRA (Low-Rank Adaptation of Large Language Models). According to Civitai, “LoRA models, known as Small Stable Diffusion models, incorporate minor adjustments into conventional checkpoint models. Typically, they are sized down by a factor of up to x100 compared to checkpoint models, making them particularly appealing for individuals who possess a vast assortment of models.” Training a LoRA model on Civitai can cost less than $1! Alternatively, with Amazon SageMaker Studio or local training, you might choose LoRA or DreamBooth.

Civitai’s LoRA Trainer UI

When captioning images for LoRA and DreamBooth fine-tuning, you can either auto-caption them (aka image-to-text) or manually pre-caption them. Common methods for auto-captioning images include BLIP/BLIP-2 (Bootstrapping Language-Image Pre-training), Oscar (Object-Semantics Aligned Pre-training), or fine-tuning your own custom image-to-text captioning model. However, multimodal models like Claude 3.5 Sonnet can automate the pre-captioning of images.
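For comparison, here is a minimal auto-captioning sketch with BLIP via the Hugging Face transformers library, assuming the publicly available Salesforce/blip-image-captioning-base checkpoint and a hypothetical local image path. The captions it produces are serviceable but generic, which is the gap that pre-captioning with a model like Claude 3.5 Sonnet helps close.

from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

# Load the pretrained BLIP captioning model and its processor.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Hypothetical local image path.
image = Image.open("images/car_1.jpg").convert("RGB")

# Preprocess the image, generate token IDs, and decode them into a caption.
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(output_ids[0], skip_special_tokens=True)

print(caption)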

Let’s assume we are fine-tuning a model to produce images of a new type of sports car, on which the base model was not originally trained, using a method that accepts captions. We’ll start by generating a caption for this single sports car image from Pexels, using Claude 3.5 Sonnet:

You are an expert at writing succinct, factual, non-biased descriptions of images. Write a 1-2 sentence description of the image. Do not use phrases like "the image shows" or "appears to be". Just output the description; do not include additional output, preamble, explanation, or chain of thought.        

View from within Anthropic’s chat interface:

Pre-captioning works well for a single image, but how do we handle multiple images in the fine-tuning dataset? Again, we can rely on multimodal models like Claude 3.5 Sonnet to caption our whole image dataset.

Sample of synthetic image dataset to be captioned and used for fine-tuning

Generating Automation Code

One technique is to ask the model to write a simple program in your language of choice, such as Python, to create individual text files, each containing a caption for the corresponding image. You would then execute the resulting code within your own interpreter. Unlike OpenAI’s Code Interpreter or Agents for Amazon Bedrock's recently announced Code interpretation feature, Anthropic does not currently offer a sandboxed environment for inline code execution.

You are an expert at writing succinct, factual, non-biased captions for images.
Write a Python 3 script that will produce individual text files, one for each image, containing a caption of the associated image.
The images in the grid are ordered left to right, top to bottom.
Captions should be 1-2 sentences in length for each image.
Do not use phrases like "the image shows" or "appears to be".
Increment the filename for each caption. For example, car_1.txt, car_2.txt, car_3.txt.
Just output the Python script; do not include additional output, preamble, explanation, or chain of thought.        

View from within Anthropic’s chat interface:

The resulting Python script was able to accomplish our task.

import os

descriptions = [
    "A blue Alfa Romeo sports car driving on a curvy road with trees in the background.",
    "A blue Alfa Romeo sports car speeding down a straight road surrounded by trees.",
    "A blue Alfa Romeo sports car on a highway with a blurred background.",
    "A blue Alfa Romeo sports car driving on a curved road with a guardrail visible.",
    "A dark gray Alfa Romeo sports car driving on a road with trees in the background.",
    "A green Alfa Romeo sports car driving on a highway with trees in the background.",
    "A dark green Alfa Romeo sports car driving on a road with trees and buildings in the background.",
    "A bright green Alfa Romeo sports car speeding down a road surrounded by trees.",
    "An orange Alfa Romeo sports car driving on a curved road with trees in the background.",
    "A red Alfa Romeo sports car driving on a winding road through a forested area.",
    "A red Alfa Romeo sports car speeding down a straight road with trees on both sides.",
    "A red Alfa Romeo sports car driving on a curved road with a blurred background of trees."
]

for i, description in enumerate(descriptions, 1):
    filename = f"car_{i}.txt"
    with open(filename, 'w') as file:
        file.write(description)

print(f"{len(descriptions)} description files have been created.")        

Finally, the resulting training files:

Generating Tags vs. Captions

Instead of natural language captions, some training techniques work better with lists of tags (aka keywords) for each image. We can easily modify our previous prompt to accomplish this task.

You are an expert at creating a list of consistent, accurate, and non-biased keywords to describe images.
Write a Python 3 script that will produce individual text files, one for each image, containing a comma-delimited list of keywords describing the associated image.
The images in the grid are ordered left to right, top to bottom.
The comma-delimited list of keywords for each image should be between 10-20 keywords.
Increment the filename for each description. For example, car_1.txt, car_2.txt, car_3.txt.
Just output the Python script; do not include additional output, preamble, explanation, or chain of thought.        

Again, the resulting Python script was able to accomplish our task.

import os

def generate_keywords(image_number):
    common_keywords = ["Alfa Romeo", "sports car", "performance vehicle", "Italian design", "luxury automobile"]
    
    color_map = {
        1: "blue", 2: "blue", 3: "blue", 4: "blue", 5: "dark green", 6: "bright green",
        7: "dark green", 8: "bright green", 9: "orange-red", 10: "red", 11: "red", 12: "red"
    }
    
    specific_keywords = {
        1: ["driving", "motion blur", "curved road"],
        2: ["front three-quarter view", "daylight", "forest background"],
        3: ["highway", "speed", "clear day"],
        4: ["winding road", "dynamic angle"],
        5: ["straight road", "speed lines"],
        6: ["cornering", "tree-lined road"],
        7: ["urban setting", "street view"],
        8: ["aggressive stance", "muscular design"],
        9: ["sharp turn", "handling prowess"],
        10: ["acceleration", "power", "vivid color"],
        11: ["classic profile", "elegant lines"],
        12: ["action shot", "performance driving"]
    }
    
    keywords = common_keywords + [color_map[image_number]] + specific_keywords[image_number]
    return ", ".join(keywords)

def create_keyword_files():
    if not os.path.exists("keywords"):
        os.makedirs("keywords")
    
    for i in range(1, 13):
        filename = f"keywords/car_{i}.txt"
        with open(filename, "w") as file:
            file.write(generate_keywords(i))

if __name__ == "__main__":
    create_keyword_files()        

Note the efficiency with which Claude 3.5 Sonnet wrote the script, avoiding repetition of common tags such as “sports car” or “performance vehicle.” Of course, you can also prompt the model to list all of each image’s tags inline:

import os

def create_keyword_file(filename, keywords):
    with open(filename, 'w') as f:
        f.write(','.join(keywords))

keywords_list = [
    ["Alfa Romeo", "sports car", "blue", "driving", "road", "speed", "performance", "Italian", "luxury", "aerodynamic", "sleek", "modern", "powerful", "stylish", "dynamic"],
    ["Alfa Romeo", "sports car", "blue", "driving", "road", "speed", "performance", "Italian", "luxury", "daylight", "motion", "curve", "elegant", "aggressive", "streamlined"],
    ["Alfa Romeo", "sports car", "blue", "driving", "highway", "speed", "performance", "Italian", "luxury", "motion blur", "power", "agile", "sporty", "premium", "fast"],
    ["Alfa Romeo", "sports car", "blue", "driving", "road", "speed", "performance", "Italian", "luxury", "cloudy", "cornering", "handling", "responsive", "aerodynamic", "precise"],
    ["Alfa Romeo", "sports car", "dark green", "driving", "road", "speed", "performance", "Italian", "luxury", "motion", "powerful", "sleek", "aggressive", "dynamic", "sporty"],
    ["Alfa Romeo", "sports car", "green", "driving", "road", "speed", "performance", "Italian", "luxury", "daylight", "vibrant", "stylish", "modern", "nimble", "eye-catching"],
    ["Alfa Romeo", "sports car", "dark green", "driving", "road", "speed", "performance", "Italian", "luxury", "trees", "elegant", "sophisticated", "powerful", "aerodynamic", "premium"],
    ["Alfa Romeo", "sports car", "bright green", "driving", "road", "speed", "performance", "Italian", "luxury", "daylight", "vivid", "striking", "aggressive", "dynamic", "bold"],
    ["Alfa Romeo", "sports car", "red", "driving", "road", "speed", "performance", "Italian", "luxury", "sunlight", "vibrant", "passionate", "sporty", "eye-catching", "powerful"],
    ["Alfa Romeo", "sports car", "red", "driving", "road", "speed", "performance", "Italian", "luxury", "motion", "dynamic", "sleek", "aggressive", "stylish", "fast"],
    ["Alfa Romeo", "sports car", "red", "driving", "road", "speed", "performance", "Italian", "luxury", "trees", "powerful", "elegant", "sporty", "premium", "aerodynamic"],
    ["Alfa Romeo", "sports car", "red", "driving", "road", "speed", "performance", "Italian", "luxury", "motion blur", "dynamic", "aggressive", "striking", "bold", "fast"]
]

for i, keywords in enumerate(keywords_list, 1):
    create_keyword_file(f"car_{i}.txt", keywords)

print(f"Created {len(keywords_list)} keyword files.")        

Scripting Amazon Bedrock

We can go one step further with our generative AI automation approach and ask the model to write a script that prompts Claude 3.5 Sonnet directly via Anthropic’s API, Amazon Bedrock, or Google Cloud Vertex AI. Below is an example of prompting Claude 3.5 Sonnet to write a Python script that will, in turn, prompt Claude 3.5 Sonnet via Amazon Bedrock’s Messages API to create caption files for each of our images. Despite being a newly released model, Claude 3.5 Sonnet appears to be familiar with Amazon Bedrock’s Messages API; otherwise, this method would not be possible.

You are an expert at writing succinct, factual, non-biased captions for images.
Write a Python 3 script that will use the Amazon Bedrock Messages API for Anthropic Claude 3.5 Sonnet and produce individual text files, one for each image in the "images" directory, containing a caption of the associated image.
Descriptions should be 1-2 sentence caption for each image.
Start all captions with "zsrtcar the car". For example, "zsrtcar the car driving on a curvy road with trees in the background. Red body color with black rims."
Do not use phrases like "the image shows" or "appears to be".
Do not mention product names.
The filename for each text file must be the same as the corresponding image, but with a .txt extension.
Place the text files in the "image_captions" directory.
Do not forget to import json and base64 into the script, output a success message after each text file is created, and catch any errors.
Just output the Python script; do not include additional output, preamble, explanation, or chain of thought.        

The resulting Python script required minimal modification to accomplish our task.

import os
import json
import base64
import boto3


def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


def get_image_description(client, image_base64):
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 300,
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "image",
                            "source": {
                                "type": "base64",
                                "media_type": "image/jpeg",
                                "data": image_base64,
                            },
                        },
                        {
                            "type": "text",
                            "text": "Provide a 1-2 sentence description of this image. Start with 'zsrtcar the car'. Do not use phrases like 'the image shows' or 'appears to be'. Do not mention specific product names.",
                        },
                    ],
                }
            ],
        }
    )

    response = client.invoke_model(
        body=body, modelId="anthropic.claude-3-5-sonnet-20240620-v1:0"
    )
    response_body = json.loads(response.get("body").read())
    return response_body["content"][0]["text"]


def main():
    client = boto3.client("bedrock-runtime")
    image_dir = "images"
    caption_dir = "image_captions"

    os.makedirs(caption_dir, exist_ok=True)

    for filename in os.listdir(image_dir):
        if filename.lower().endswith((".png", ".jpg", ".jpeg")):
            try:
                image_path = os.path.join(image_dir, filename)
                image_base64 = encode_image(image_path)
                description = get_image_description(client, image_base64)

                caption_filename = os.path.splitext(filename)[0] + ".txt"
                caption_path = os.path.join(caption_dir, caption_filename)

                with open(caption_path, "w") as f:
                    f.write(description)

                print(f"Successfully created caption for {filename}")
            except Exception as e:
                print(f"Error processing {filename}: {str(e)}")


if __name__ == "__main__":
    main()        
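If you later need the same captions in the Bedrock JSONL format shown earlier, a small follow-up sketch can stitch the image_captions directory back into a dataset file; the S3 prefix below is a placeholder, and the script assumes the source images are JPEGs.

import json
import os

caption_dir = "image_captions"
s3_prefix = "s3://sagemaker-us-east-1-123456789012/sdxl_car_finetune/images"  # placeholder

with open("dataset.jsonl", "w") as out:
    for filename in sorted(os.listdir(caption_dir)):
        if not filename.endswith(".txt"):
            continue
        with open(os.path.join(caption_dir, filename)) as f:
            caption = f.read().strip()
        # Assumes the image shares the caption's base name with a .jpg extension.
        image_name = os.path.splitext(filename)[0] + ".jpg"
        record = {"image-ref": f"{s3_prefix}/{image_name}", "caption": caption}
        out.write(json.dumps(record) + "\n")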

Generating Tags vs. Captions with Amazon Bedrock

Again, we can do the same for image tags instead of captions:

You are an expert at creating consistent, accurate, and non-biased tags to describe images.
Write a Python 3 script that will use the Amazon Bedrock Messages API to call Anthropic Claude 3.5 Sonnet and produce individual text files, one for each PNG or JPEG image in the "images" directory, containing a comma-delimited list of descriptive tags describing the associated image.
The comma-delimited list for each PNG or JPEG image should contain 10-20 tags.
Do not mention product names.
All tags must be lowercase.
Always start the comma-delimited list with the unique tag "zsrtcar the car". For example, "zsrtcar the car, metallic blue, sports car, high-performance, driving, winding road, blue skies, fields, blurred background"
Never end the list with a period. The tag for the color of the subject should be separate. For example, "light blue", "bright red", "metallic green", "white".
The filename for each text file must be the same as the corresponding image, but with a .txt extension.
Place the text files in the "image_captions" directory.
Output a success message after each text file is created, and catch any errors.
Just output the Python script; do not include additional output, preamble, explanation, or chain of thought.        

The resulting Python script required minimal modification to accomplish our task.

import os
import boto3
import json
from PIL import Image
import base64
import io

# Initialize the Bedrock client
bedrock = boto3.client(service_name="bedrock-runtime")


def encode_image(image_path):
    with Image.open(image_path) as img:
        buffered = io.BytesIO()
        img.save(buffered, format="PNG")
        return base64.b64encode(buffered.getvalue()).decode("utf-8")


def generate_tags(image_path):
    encoded_image = encode_image(image_path)

    prompt = f"""You are an expert at creating consistent, accurate, and non-biased tags to describe images. Analyze the following image and provide a comma-delimited list of 10-20 descriptive tags. Do not mention product names. All tags should be lowercase. Always start the list with 'zsrtcar the car'. The tag for the color of the subject should be separate. For example, 'light blue', 'red', 'metallic green', 'white'. Do not end the list with a period.

    Here's the image to analyze:
    [image]"""

    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 500,
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": prompt},
                        {
                            "type": "image",
                            "source": {
                                "type": "base64",
                                "media_type": "image/png",
                                "data": encoded_image,
                            },
                        },
                    ],
                }
            ],
        }
    )

    response = bedrock.invoke_model(
        body=body,
        modelId="anthropic.claude-3-5-sonnet-20240229-v1:0",
        contentType="application/json",
        accept="application/json",
    )

    response_body = json.loads(response.get("body").read())
    return response_body["content"][0]["text"].strip()


def process_images():
    input_dir = "images"
    output_dir = "image_captions"

    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    for filename in os.listdir(input_dir):
        if filename.lower().endswith((".png", ".jpg", ".jpeg")):
            input_path = os.path.join(input_dir, filename)
            output_path = os.path.join(
                output_dir, os.path.splitext(filename)[0] + ".txt"
            )

            try:
                tags = generate_tags(input_path)
                with open(output_path, "w") as f:
                    f.write(tags)
                print(f"Successfully created tags for {filename}")
            except Exception as e:
                print(f"Error processing {filename}: {str(e)}")


if __name__ == "__main__":
    process_images()        

The tags that Claude generated were effective in fine-tuning the model.

zsrtcar the car, green, coupe, sports car, motion blur, trees in background, road, double yellow line, automotive photography, italian style        

Training images and corresponding text files containing the Claude-generated tags were uploaded into Civitai’s LoRA Trainer UI and used to successfully fine-tune a copy of Stable Diffusion XL 1.0 (SDXL) using LoRA.

Civitai's LoRA Trainer UI, showing pre-generated tags
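If your training tool accepts a dataset archive rather than individual file uploads (a common convention for LoRA trainers), a short sketch to package the images together with their matching tag files might look like this; the directory names match the earlier scripts.

import os
import zipfile

# Bundle each image and its matching .txt tag file into a single archive.
with zipfile.ZipFile("lora_dataset.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for filename in sorted(os.listdir("images")):
        if filename.lower().endswith((".png", ".jpg", ".jpeg")):
            zf.write(os.path.join("images", filename), arcname=filename)
            tag_file = os.path.splitext(filename)[0] + ".txt"
            tag_path = os.path.join("image_captions", tag_file)
            if os.path.exists(tag_path):
                zf.write(tag_path, arcname=tag_file)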


Intermediate checkpoint results of LoRA fine-tuning of SDXL 1.0

Alternatively, we can use Anthropic’s Workbench to develop the code necessary to interact with Amazon Bedrock, Google Cloud Vertex AI, or Anthropic’s API directly in Python or TypeScript.

Anthropic’s Workbench UI

Conclusion

In this post, we learned how to utilize the latest multimodal foundation models, such as the Claude family from Anthropic, to automate the tedious and often error-prone process of creating labeled datasets for training and fine-tuning Generative AI models. Although this post focused on text-to-image generation model fine-tuning data, the same techniques can be applied to training and fine-tuning text-to-text, embedding, and other model types.

All of the prompts used in this post can also be found on GitHub.


This blog represents my viewpoints and not those of my employer, Amazon Web Services (AWS). All product names, images, logos, and brands are the property of their respective owners.
