Not much to add. This is the third iteration of my online perceptron implementation; the previous two are here and here.
This time I am going to implement it using CLOS, because I need an object with a protected state that changes every time we feed the perceptron a new training example.
(defclass perceptron ()
  ((size :initarg :size :accessor perceptron-size)   ; number of inputs
   (fn :initarg :fn)                                 ; activation function
   (eta :initarg :eta :accessor perceptron-rate)     ; learning rate
   (derivative :accessor perceptron-der)             ; last derivative estimate
   (weights :accessor perceptron-weights)))          ; bias weight + input weights
(defmethod initialize-instance :after ((p perceptron) &rest args)
  (declare (ignore args))
  ;; size + 1 weights: the extra slot is the bias term
  (setf (perceptron-weights p)
        (make-list (1+ (perceptron-size p)) :initial-element 0.0)))
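As a quick sanity check of that :after method, the call it wraps produces size + 1 zeros on its own (the extra slot is for the bias):

```lisp
;; MAKE-LIST builds the zeroed weight vector; with :size 3 we get
;; four weights, the first one belonging to the bias input.
(make-list (1+ 3) :initial-element 0.0)
;; => (0.0 0.0 0.0 0.0)
```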
I am going to need two methods: one for the feed-forward pass, and the other for back-propagation.
(defmethod forward ((p perceptron) x-val)
  (let* ((input (cons 1.0 x-val))   ; prepend the constant bias input
         (f (slot-value p 'fn))
         (calc (funcall f (reduce #'+ (mapcar #'* (perceptron-weights p) input))))
         (eta (perceptron-rate p))
         (eps (/ eta 2.0)))
    ;; central-difference estimate of f', with half-width eps = eta/2
    ;; (hence the division by eta = 2 * eps)
    (setf (perceptron-der p)
          (/ (- (funcall f (+ calc eps))
                (funcall f (- calc eps)))
             eta))
    calc))
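The derivative slot is filled with a symmetric (central) difference: with eps = eta/2, (f(x + eps) - f(x - eps)) / (2 * eps) approximates f'(x), which is why the code divides by eta = 2 * eps. The estimator in isolation (the function name here is mine, not part of the perceptron):

```lisp
(defun central-difference (f x eps)
  "Approximate the derivative of F at X by a symmetric difference."
  (/ (- (funcall f (+ x eps))
        (funcall f (- x eps)))
     (* 2 eps)))

;; Sanity check against a known derivative: d/dx x^2 = 2x.
(central-difference (lambda (x) (* x x)) 3.0 1d-4)
;; => 6.0 up to floating-point noise
```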
(defmethod train ((p perceptron) y-val x-val)
  (let* ((update (* (- y-val (forward p x-val))
                    (perceptron-rate p)
                    ;; scale by the reciprocal of the derivative estimate;
                    ;; the tiny random addend guards against division by zero
                    (/ (+ (perceptron-der p) (random 1d-9)))))
         (weights (perceptron-weights p))
         (input (cons 1.0 x-val)))
    (dotimes (i (length weights))
      (incf (elt weights i)
            (* update (elt input i))))
    (setf (perceptron-weights p) weights)))
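The update loop is plain delta-rule arithmetic: each weight moves by update times the matching entry of the bias-extended input. Pulled out as a standalone function (a hypothetical helper, not used by the class), one step looks like this:

```lisp
(defun update-weights (weights x-val update)
  "Return WEIGHTS shifted by UPDATE along the bias-extended input X-VAL."
  (mapcar (lambda (w x) (+ w (* update x)))
          weights
          (cons 1.0 x-val)))   ; prepend the constant bias input

(update-weights '(0.0 0.0 0.0) '(2.0 -1.0) 0.5)
;; => (0.5 1.0 -0.5)
```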
For the tests, I am going to use the same dataset I used for the Scala implementation:
(defparameter data
  (let (res)
    (with-open-file (in "data/skin.csv" :direction :input)
      (do ((line (read-line in nil) (read-line in nil)))
          ((null line) res)
        ;; note the doubled backslash: a plain "\t" in a Lisp string
        ;; literal is just the letter t, not a tab
        (push (reverse (mapcar #'read-from-string (ppcre:split "\\t" line)))
              res)))))
(defparameter n (length data))
And now the training and testing:
(defun logit (x) (/ 1d0 (+ 1d0 (exp (- x)))))
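The logistic function squashes the weighted sum into (0, 1) to match the 0/1 labels. Its analytic derivative, sigma'(x) = sigma(x) * (1 - sigma(x)), offers a way to cross-check the central-difference estimate that forward computes:

```lisp
(defun logit (x) (/ 1d0 (+ 1d0 (exp (- x)))))

(defun logit-derivative (x)
  "Analytic derivative of the logistic function."
  (* (logit x) (- 1d0 (logit x))))

;; The analytic and numerical values should agree to many digits.
(let ((x 0.7d0)
      (eps 1d-6))
  (list (logit-derivative x)
        (/ (- (logit (+ x eps)) (logit (- x eps)))
           (* 2 eps))))
```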
(let ((node (make-instance 'perceptron
                           :size 3
                           :eta 5d-4
                           :fn #'logit)))
  ;; train on 20000 random samples; the 1/2 labels are shifted to 0/1
  (loop repeat 20000 do
    (let ((point (elt data (random n))))
      (train node (- (car point) 1.0) (cdr point))))
  ;; test on 1000 random samples and report the percentage of
  ;; predictions within 0.01 of the label
  (/ (loop repeat 1000 sum
       (let* ((point (elt data (random n)))
              (delta (- (forward node (cdr point)) (- (car point) 1.0))))
         (if (< (abs delta) 1d-2) 1 0)))
     10.0))
The result comes out at 90.6, approximately 91% accuracy. Pretty nice.