I can create a simple dataset of houses and integrate it into the code example. Here's an updated version of the code with a sample dataset:
Code
import torch
import torch.nn as nn
import torch.optim as optim
import pandas as pd

# Create a sample dataset of houses
data = {
    'Bedrooms': [3, 4, 2, 2, 3, 4, 5, 3, 2, 4],
    'Square Footage': [1500, 2000, 1000, 1200, 1800, 2500, 3000, 2000, 1500, 2500],
    'Location': ['Urban', 'Suburban', 'Rural', 'Urban', 'Suburban', 'Rural', 'Urban', 'Suburban', 'Rural', 'Urban'],
    'Price': [250000, 300000, 180000, 220000, 280000, 320000, 350000, 300000, 200000, 280000]
}
df = pd.DataFrame(data)

# Convert the numeric features and the target to tensors
# (Location is categorical and not used as an input in this simple example)
inputs = torch.tensor(df[['Bedrooms', 'Square Footage']].values, dtype=torch.float)
targets = torch.tensor(df['Price'].values, dtype=torch.float).view(-1, 1)

# Standardize inputs and targets; on the raw dollar and square-footage scales
# the MSE loss would blow up with lr=0.01
input_mean, input_std = inputs.mean(dim=0), inputs.std(dim=0)
target_mean, target_std = targets.mean(), targets.std()
inputs = (inputs - input_mean) / input_std
targets = (targets - target_mean) / target_std

# Define the neural network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(2, 10)  # input layer (2 features) -> hidden layer (10 units)
        self.fc2 = nn.Linear(10, 1)  # hidden layer (10 units) -> output layer (1 value)

    def forward(self, x):
        x = torch.relu(self.fc1(x))  # ReLU activation for the hidden layer
        x = self.fc2(x)
        return x

# Initialize the network and the loss function
net = Net()
criterion = nn.MSELoss()

# Train the network with full-batch gradient descent
optimizer = optim.SGD(net.parameters(), lr=0.01)
for epoch in range(100):
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 10 == 0:
        print(f'Epoch {epoch+1}, Loss: {loss.item():.4f}')
Dataset
The dataset consists of 10 houses with the following features:
Bedrooms: The number of bedrooms in the house
Square Footage: The square footage of the house
Location: The location of the house (Urban, Suburban, or Rural)
Price: The price of the house (the prediction target)
The dataset is built with pd.DataFrame, and the two numeric features (Bedrooms and Square Footage) plus the Price target are converted to tensors with torch.tensor. The Location column is categorical and is not used as a network input in this simple example. The inputs and targets are also standardized (zero mean, unit variance) so that SGD training stays numerically stable on the raw dollar and square-footage scales.
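If you do want the network to use Location as well, one common option is to one-hot encode it. Here's a minimal sketch of how that could look; this is an optional extension rather than part of the code above, and the column names produced by pd.get_dummies and the 5-feature layout are assumptions for illustration:

# Sketch: one-hot encode Location with pandas and build a 5-feature input tensor
encoded = pd.get_dummies(df, columns=['Location'])  # adds Location_Rural / Location_Suburban / Location_Urban
feature_cols = ['Bedrooms', 'Square Footage',
                'Location_Rural', 'Location_Suburban', 'Location_Urban']
inputs_with_location = torch.tensor(encoded[feature_cols].values.astype('float32'))
# The network's first layer would then need nn.Linear(5, 10) instead of nn.Linear(2, 10).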
Training
The network is trained with the mean squared error (MSE) loss and the stochastic gradient descent (SGD) optimizer (learning rate 0.01). The training loop runs for 100 epochs over the full dataset, computing the loss, backpropagating the gradients, and updating the weights at each epoch; the loss is printed every 10 epochs.
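After training, you can sanity-check the network by predicting the price of a new house and mapping the standardized output back to dollars. This is just a minimal sketch: the 3-bedroom, 1,600-square-foot example is made up, and it reuses the input_mean/input_std/target_mean/target_std tensors computed in the code above:

# Sketch: predict the price of a hypothetical 3-bedroom, 1,600 sq ft house
new_house = torch.tensor([[3.0, 1600.0]])
new_house = (new_house - input_mean) / input_std            # standardize with the training statistics
with torch.no_grad():
    predicted = net(new_house) * target_std + target_mean   # undo the target standardization
print(f'Predicted price: ${predicted.item():,.0f}')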
I hope this updated code example helps! Let me know if you have any further questions.