1. From a List of Tuples
This method is great for small datasets where rows can be manually defined.
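A minimal sketch, assuming a local SparkSession and illustrative name/age columns:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuples-example").getOrCreate()

# Each tuple is one row; column names are supplied separately.
data = [("Alice", 30), ("Bob", 25)]
df = spark.createDataFrame(data, ["name", "age"])
df.show()
```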
2. From a List of Dictionaries
Here we do not provide any schema details; Spark automatically infers the schema from the keys and values of the dictionaries.
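A minimal sketch under the same assumptions; note that schema inference from dictionaries is version-dependent (older Spark releases warn and suggest Row objects instead):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dicts-example").getOrCreate()

# No schema is passed: column names come from the dict keys,
# and column types are inferred from the values.
data = [{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]
df = spark.createDataFrame(data)
df.printSchema()
df.show()
```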
3. From an RDD
RDDs (Resilient Distributed Datasets) are the foundation of Spark, and you can convert them to DataFrames.
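A minimal sketch, assuming an RDD of tuples and illustrative column names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-example").getOrCreate()

# Build an RDD of tuples, then convert it to a DataFrame.
rdd = spark.sparkContext.parallelize([("Alice", 30), ("Bob", 25)])
df = rdd.toDF(["name", "age"])  # spark.createDataFrame(rdd, [...]) also works
df.show()
```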
4. From a Pandas DataFrame
If you already have a pandas DataFrame, you can convert it to a PySpark DataFrame.
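A minimal sketch, assuming pandas is installed alongside PySpark:

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pandas-example").getOrCreate()

# Column names and types carry over from the pandas DataFrame.
pdf = pd.DataFrame({"name": ["Alice", "Bob"], "age": [30, 25]})
df = spark.createDataFrame(pdf)
df.show()
```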
5. From a CSV File
This is useful for loading larger datasets stored in files.
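A minimal sketch; people.csv is a placeholder path, and the header and inferSchema options tell Spark to take column names from the first row and guess the column types:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-example").getOrCreate()

df = spark.read.csv("people.csv", header=True, inferSchema=True)
df.show()
```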
6. From a JSON File
Create a DataFrame directly from a JSON file.
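A minimal sketch; people.json is a placeholder, and by default Spark expects one JSON object per line (pass multiLine=True for a single pretty-printed document):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-example").getOrCreate()

df = spark.read.json("people.json")
df.show()
```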
7. Programmatically with Row Objects
Row objects allow for more structured data creation, since each Row carries its own field names.
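A minimal sketch with illustrative fields:

```python
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.appName("row-example").getOrCreate()

# Each Row carries its field names, so no separate schema is needed.
rows = [Row(name="Alice", age=30), Row(name="Bob", age=25)]
df = spark.createDataFrame(rows)
df.show()
```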
8. Using the Range Function
Use the range() method of SparkSession to create a DataFrame with a sequence of numbers.
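A minimal sketch; spark.range(start, end, step) yields a single column named id:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("range-example").getOrCreate()

# A one-column DataFrame ("id") holding the values 0, 2, 4, 6, 8.
df = spark.range(0, 10, 2)
df.show()
```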