Experiment 2

Implementation of Activation Functions in an Artificial Neural Network

Theory

The activation function calculates the output response of a neuron by mapping its net input (the weighted sum of its inputs plus a bias) to an output value.
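
As a minimal sketch, with made-up inputs, weights, and a bias, the snippet below shows where the activation function fits into a single neuron's computation; the threshold function used here is the binary step defined in full in item 1.


                    import numpy as np

                    def step(net):
                        # Threshold activation, used only for this illustration
                        return np.heaviside(net, 1)

                    # Hypothetical inputs, weights, and bias for a single neuron
                    x = np.array([0.5, -1.2, 3.0])
                    w = np.array([0.4, 0.7, -0.2])
                    b = 0.1

                    net_input = np.dot(w, x) + b  # weighted sum of the inputs plus the bias
                    output = step(net_input)      # the activation maps the net input to the output response

                    print("Net input:", net_input)   # about -1.14
                    print("Neuron output:", output)  # 0.0, since the net input is negative

The following activation functions, each with different properties, are implemented in this experiment: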

1. Binary Step Activation Function:

The binary step function is a threshold-based activation function that returns 0 if the input is less than zero and 1 otherwise.


                    import numpy as np 
                    import matplotlib.pyplot as plt 
                    
                    # Binary Step Activation Function 
                    def binaryStep(x): 
                        return np.heaviside(x, 1) 
                    
                    x = np.linspace(-10, 10) 
                    plt.plot(x, binaryStep(x)) 
                    plt.axis('tight') 
                    plt.title('Activation Function: Binary Step') 
                    plt.show() 
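
For example, evaluating binaryStep at a few sample points confirms the threshold behaviour; note that the second argument of np.heaviside sets the value at exactly zero, here to 1.

                    # Sample evaluations, reusing binaryStep from the block above
                    print(binaryStep(np.array([-2.0, 0.0, 3.0])))  # [0. 1. 1.]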
    

2. Linear Activation Function:

The Linear Activation Function returns the input unchanged. It is represented by f(x) = x.


                    import numpy as np 
                    import matplotlib.pyplot as plt 
                    
                    # Linear Activation Function 
                    def linear(x): 
                        return x 
                    
                    x = np.linspace(-10, 10) 
                    plt.plot(x, linear(x)) 
                    plt.axis('tight') 
                    plt.title('Activation Function: Linear') 
                    plt.show() 
    

3. Sigmoid Activation Function:

The Sigmoid Activation Function, f(x) = 1 / (1 + e^(-x)), squashes its input into the range (0, 1), which makes it well suited to binary classification problems.


                    import numpy as np 
                    import matplotlib.pyplot as plt 
                    
                    # Sigmoid Activation Function 
                    def sigmoid(x): 
                        return 1 / (1 + np.exp(-x)) 
                    
                    x = np.linspace(-10, 10) 
                    plt.plot(x, sigmoid(x)) 
                    plt.axis('tight') 
                    plt.title('Activation Function: Sigmoid') 
                    plt.show() 
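
As a usage sketch with a made-up score, the sigmoid output can be read as the probability of the positive class and thresholded at 0.5 for a binary decision.

                    # Reusing sigmoid from the block above; the score value is hypothetical
                    score = 2.3
                    prob = sigmoid(score)            # about 0.909, read as P(class = 1)
                    label = 1 if prob >= 0.5 else 0  # simple decision rule
                    print(prob, label)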
    

4. Bipolar Sigmoid Activation Function:

The Bipolar Sigmoid Activation Function, f(x) = 2 / (1 + e^(-x)) - 1, returns values between -1 and 1.


                    import numpy as np 
                    import matplotlib.pyplot as plt 
                    
                    # Bipolar Sigmoid Activation Function 
                    def bipolarSigmoid(x): 
                        return -1 + 2 / (1 + np.exp(-x)) 
                    
                    x = np.linspace(-10, 10) 
                    plt.plot(x, bipolarSigmoid(x)) 
                    plt.axis('tight') 
                    plt.title('Activation Function: Bipolar Sigmoid') 
                    plt.show() 
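
The bipolar sigmoid is mathematically identical to tanh(x/2), which is easy to verify numerically by reusing bipolarSigmoid from the block above.

                    # Numerical check of the identity bipolarSigmoid(x) = tanh(x / 2)
                    x = np.linspace(-10, 10)
                    print(np.allclose(bipolarSigmoid(x), np.tanh(x / 2)))  # True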
                

5. Hyperbolic Tangent (tanh) Activation Function:

The Hyperbolic Tangent Activation Function, f(x) = (e^x - e^(-x)) / (e^x + e^(-x)), also returns values between -1 and 1, similar to the bipolar sigmoid but with a steeper gradient: its slope at the origin is 1, versus 0.5 for the bipolar sigmoid.


                    import numpy as np 
                    import matplotlib.pyplot as plt 
                    
                    # Hyperbolic Tangent (tanh) Activation Function 
                    def tanh(x): 
                        return np.tanh(x) 
                    
                    x = np.linspace(-10, 10) 
                    plt.plot(x, tanh(x)) 
                    plt.axis('tight') 
                    plt.title('Activation Function: Hyperbolic Tangent (tanh)') 
                    plt.show() 
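
A quick finite-difference check, reusing bipolarSigmoid from item 4, illustrates the steeper slope at the origin.

                    # Central-difference slope estimates at x = 0
                    h = 1e-5
                    print((tanh(h) - tanh(-h)) / (2 * h))                      # about 1.0
                    print((bipolarSigmoid(h) - bipolarSigmoid(-h)) / (2 * h))  # about 0.5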
                

6. ReLU (Rectified Linear Unit) Activation Function:

The ReLU Activation Function, f(x) = max(0, x), returns the input if it is positive and zero otherwise.


                    import numpy as np 
                    import matplotlib.pyplot as plt 
                    
                    # ReLU (Rectified Linear Unit) Activation Function 
                    def ReLU(x): 
                        return np.maximum(0, x) 
                    
                    x = np.linspace(-10, 10) 
                    plt.plot(x, ReLU(x)) 
                    plt.axis('tight') 
                    plt.title('Activation Function: ReLU (Rectified Linear Unit)') 
                    plt.show() 
    

7. Leaky ReLU Activation Function:

The Leaky ReLU Activation Function is similar to ReLU, but instead of returning zero for negative inputs it multiplies them by a small slope (0.01 in the code below), so a small gradient still flows when the input is negative.


                    import numpy as np 
                    import matplotlib.pyplot as plt 
    
                    # Leaky ReLU Activation Function 
                    def leakyReLU(x): 
                        return np.maximum(0.01 * x, x)  # Using a small slope (0.01) for negative input
    
                    x = np.linspace(-10, 10) 
                    plt.plot(x, leakyReLU(x)) 
                    plt.axis('tight') 
                    plt.title('Activation Function: Leaky ReLU') 
                    plt.show() 
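
As a quick comparison, reusing the ReLU function from item 6, the two functions agree for non-negative inputs, but Leaky ReLU keeps a small non-zero response for negative inputs.

                    # Sample inputs are illustrative values only
                    vals = np.array([-5.0, -0.5, 0.0, 2.0])
                    print(ReLU(vals))       # values: 0.0, 0.0, 0.0, 2.0
                    print(leakyReLU(vals))  # values: -0.05, -0.005, 0.0, 2.0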
                

8. Softmax Activation Function:

The Softmax Activation Function is used in the output layer of neural network models for multi-class classification problems. Unlike the element-wise functions above, it operates on an entire vector of scores and converts it into a probability distribution over the classes: every output lies between 0 and 1, and the outputs sum to 1.


                    import numpy as np 
                    import matplotlib.pyplot as plt 
    
                    # Softmax Activation Function 
                    def softmax(x): 
                        exp_vals = np.exp(x - np.max(x, axis=0))  # Subtracting the maximum value for numerical stability
                        return exp_vals / np.sum(exp_vals, axis=0)
    
                    x = np.linspace(-10, 10) 
                    plt.plot(x, softmax(x)) 
                    plt.axis('tight') 
                    plt.title('Activation Function: Softmax') 
                    plt.show() 
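
As a usage sketch with a made-up score vector for three classes, softmax turns the scores into non-negative probabilities that sum to 1.

                    # Hypothetical class scores (logits)
                    scores = np.array([2.0, 1.0, 0.1])
                    probs = softmax(scores)
                    print(probs)          # approximately [0.659 0.242 0.099]
                    print(np.sum(probs))  # 1.0 (up to floating-point rounding)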
                

Conclusion

The choice of activation function has a significant impact on the performance and behavior of neural networks. Each activation function has its own advantages and disadvantages, and the selection should be based on the specific requirements of the problem at hand.

Reference: Neural Networks: A Comprehensive Foundation by Simon Haykin