CSCI-561 – Fall 2022 – Foundations of Artificial Intelligence Homework 3

In this homework assignment, you will implement a multi-layer perceptron (MLP) neural network and use it to classify data from four different datasets, shown in Figure 1. Your implementation will be made from scratch, using no external libraries other than NumPy; machine learning libraries are NOT allowed (e.g., SciPy, TensorFlow, Caffe, PyTorch, Torch, mxnet, etc.).
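To make the "from scratch, NumPy only" requirement concrete, here is a minimal sketch of the kind of building block such an implementation tends to contain: a fully connected layer with a forward pass and a plain gradient-descent update. The initialization scale, sigmoid helper, and learning rate below are illustrative assumptions, not part of the assignment specification.

import numpy as np

def sigmoid(z):
    # Element-wise logistic activation, a common choice for binary classification.
    return 1.0 / (1.0 + np.exp(-z))

class DenseLayer:
    # A fully connected layer: forward pass plus a plain gradient-descent update.

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Small random weights; W has shape (n_in, n_out), b has shape (n_out,).
        self.W = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def forward(self, X):
        # Cache the batch input for use in the backward pass.
        self.X = X
        return X @ self.W + self.b

    def backward(self, grad_out, lr=0.01):
        # grad_out is dLoss/d(layer output) for the batch; update W and b,
        # then return dLoss/d(layer input) so earlier layers can be updated too.
        grad_W = self.X.T @ grad_out
        grad_b = grad_out.sum(axis=0)
        grad_in = grad_out @ self.W.T
        self.W -= lr * grad_W
        self.b -= lr * grad_b
        return grad_in

Stacking several such layers with nonlinear activations between them, plus a loss such as binary cross-entropy, gives the multi-hidden-layer network the assignment asks for.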

You will train and test your neural network implementation on four datasets inspired by the TensorFlow Neural Network Playground (https://playground.tensorflow.org). We encourage you to visit this site and experiment with various model settings and datasets. There are 4 files associated with each dataset, all following a common naming scheme in which <name> is one of the 4 dataset names: spiral, circle, xor, or gaussian. As a result, there are a total of 16 data files, all of which can be found in HW3 -> resource -> asnlib -> public.

Below is a visual representation of each dataset along with a brief description.

Spiral

Both classes are interwoven in a spiral pattern.

Data points are subject to noise along their respective spiral and thus may overlap.

Gaussian

Data points are generated and classified according to two Gaussian distributions. The distributions have different means, but samples may overlap as pictured.

XOR

Data points are classified according to the XOR function. Noise may push data points across the XOR “boundaries”, as seen in the figure below.

Circle

Data points are generated and classified according to two annuli (rings) sharing a common center. Although the annuli do not overlap, noise may push data points across the gap.

The train and test files for each dataset represent an 80/20 train/test split. You are welcome to aggregate the data from each set and re-split it to your liking. All datasets have 2-dimensional data points (the x, y coordinates of the point in R²), along with binary labels (either 0 or 1).
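As a point of reference, the loading step can be as simple as the sketch below. It assumes each row of a data file holds the two comma-separated coordinates and each row of a label file holds a single 0/1 label; the file names are the generic ones used in the invocation example later in this description, and the exact layout should be verified against the provided files.

import numpy as np

# Assumed layout: "x,y" per row in the data files, one 0/1 label per row in the label files.
X_train = np.genfromtxt("train_data.csv", delimiter=",")               # shape (N, 2)
y_train = np.genfromtxt("train_label.csv", delimiter=",").astype(int)  # shape (N,)
X_test = np.genfromtxt("test_data.csv", delimiter=",")                 # shape (M, 2)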

Your task is to implement a multi-hidden-layer neural network learner (see the model description section for additional details). For a given dataset, your program will take three input files (provided as paths through command-line arguments) and produce one output file as follows:

run your_program train_data.csv train_label.csv test_data.csv

⇒ test_predictions.csv

For example,

python3 NeuralNetwork3.py train_data.csv train_label.csv test_data.csv

⇒ test_predictions.csv

In other words, your algorithm file NeuralNetwork.* will take training data, training labels, and testing data as inputs, and output classification predictions on the testing data. Note that your neural network implementation should not depend on which of the four datasets is provided during a given execution; your script will only receive the training data/labels and test data for a single dataset type at a time.
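The skeleton below shows one way to wire up this input/output contract with sys.argv and NumPy. The predict function is a hypothetical stand-in (a trivial majority-label baseline) for your actual trained network, included only so the sketch runs end to end; the data-loading assumptions are the same as in the earlier snippet.

import sys
import numpy as np

def predict(X_train, y_train, X_test):
    # Hypothetical placeholder for the trained MLP's predictions: it simply
    # returns the majority training label so the skeleton runs end to end.
    majority = int(round(float(y_train.mean())))
    return np.full(X_test.shape[0], majority, dtype=int)

def main():
    # Required invocation:
    #   python3 NeuralNetwork3.py train_data.csv train_label.csv test_data.csv
    train_data_path, train_label_path, test_data_path = sys.argv[1:4]

    X_train = np.genfromtxt(train_data_path, delimiter=",")
    y_train = np.genfromtxt(train_label_path, delimiter=",").astype(int)
    X_test = np.genfromtxt(test_data_path, delimiter=",")

    preds = predict(X_train, y_train, X_test)

    # One 0/1 prediction per line, in the same order as the rows of test_data.csv.
    np.savetxt("test_predictions.csv", preds, fmt="%d")

if __name__ == "__main__":
    main()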