Not much to add. This is the third iteration of my online perceptron implementation. The previous implementations are here and here.
This time I am going to implement it using CLOS, because I need an object with a protected state that changes every time we feed the perceptron a new training example.
(defclass perceptron ()
  ((size :initarg :size :accessor perceptron-size)
   (fn :initarg :fn)
   (eta :initarg :eta :accessor perceptron-rate)
   (derivative :accessor perceptron-der)
   (weights :accessor perceptron-weights)))

(defmethod initialize-instance :after ((p perceptron) &rest args)
  (setf (perceptron-weights p)
        (make-list (1+ (perceptron-size p)) :initial-element 0.0)))

#<STANDARD-CLASS COMMON-LISP-USER::PERCEPTRON>
#<STANDARD-METHOD COMMON-LISP:INITIALIZE-INSTANCE :AFTER (PERCEPTRON) {1004D48D63}>
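The (1+ …) in the initializer allocates one extra weight for the bias term: the forward pass prepends a constant 1.0 to every input, so with weights w_0 … w_size the unit computes

```latex
\hat{y} = f\!\left(w_0 + \sum_{i=1}^{\mathit{size}} w_i x_i\right)
```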
I am going to need two methods: one for the feed-forward pass, and the other for back-propagation.
(defmethod forward ((p perceptron) x-val)
  (let* ((input (cons 1.0 x-val))
         (f (slot-value p 'fn))
         (calc (funcall f (reduce #'+ (mapcar #'* (perceptron-weights p) input))))
         (eta (perceptron-rate p))
         (eps (/ eta 2.0)))
    (setf (perceptron-der p)
          (/ (- (funcall f (+ calc eps))
                (funcall f (- calc eps)))
             eta))
    calc))
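The value stored in the derivative slot is a symmetric (central) difference estimate of the activation's slope, with a step of eps = eta/2 so that the usual 2·eps denominator collapses to eta (note it is evaluated at the activation output calc, not at the raw weighted sum):

```latex
f'(\mathit{calc}) \approx \frac{f(\mathit{calc} + \eta/2) - f(\mathit{calc} - \eta/2)}{\eta}
```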
(defmethod train ((p perceptron) y-val x-val)
  (let* ((update (* (- y-val (forward p x-val))
                    (/ (perceptron-rate p)
                       (+ (perceptron-der p) (random 1d-9)))))
         (weights (perceptron-weights p))
         (input (cons 1.0 x-val)))
    (dotimes (i (length weights))
      (incf (elt weights i)
            (* update (elt input i))))
    (setf (perceptron-weights p) weights)))

#<STANDARD-METHOD COMMON-LISP-USER::FORWARD (PERCEPTRON T) {10050B3563}>
#<STANDARD-METHOD COMMON-LISP-USER::TRAIN (PERCEPTRON T T) {1004EED3D3}>
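Written out, the update applied to each weight is the following, where epsilon stands for the tiny (random 1d-9) jitter that guards against dividing by a zero derivative, and x_0 = 1 is the prepended bias input:

```latex
w_i \leftarrow w_i + \underbrace{(y - \hat{y})\,\frac{\eta}{f' + \varepsilon}}_{\mathit{update}} \cdot x_i
```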
For the tests, I am going to use the same dataset I used for the Scala implementation:
(defparameter data
  (let (res)
    (with-open-file (in "data/skin.csv" :direction :input)
      (do ((line (read-line in nil) (read-line in nil)))
          ((null line) res)
        (push (reverse (mapcar #'read-from-string (ppcre:split "\t" line))) res)))))

(defparameter n (length data))

DATA
N
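For reference, here is the per-line parsing above rendered as a small Python sketch (the sample row and its values are made up for illustration; the real contents come from data/skin.csv). Each tab-separated row is read as numbers and reversed, so the class label, originally the last column, ends up first:

```python
def parse_line(line):
    # Mirror of (reverse (mapcar #'read-from-string (ppcre:split "\t" line))):
    # split on tabs, parse each field as a number, reverse the row so the
    # class label (last column in the file) comes first.
    fields = [float(tok) for tok in line.strip().split("\t")]
    return list(reversed(fields))

# Hypothetical row: three feature columns followed by a 1/2 class label.
point = parse_line("73\t85\t123\t1")
label = point[0] - 1.0  # the training loop maps the 1/2 labels to 0.0/1.0
```

This is why the training loop below can use (car point) for the label and (cdr point) for the features.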
And now the training and testing:
(defun logit (x) (/ 1d0 (+ 1d0 (exp (- x)))))

(let ((node (make-instance 'perceptron
                           :size 3
                           :eta 5d-4
                           :fn #'logit)))
  (loop repeat 20000 do
    (let ((point (elt data (random n))))
      (train node (- (car point) 1.0) (cdr point))))
  (/ (loop repeat 1000 sum
           (let* ((point (elt data (random n)))
                  (delta (- (forward node (cdr point)) (- (car point) 1.0))))
             (if (< (abs delta) 1d-2) 1 0)))
     10.0))

LOGIT
90.6
Pretty nice, approximately 91%: the loop counts how many of the 1000 test points land within 0.01 of their label, and dividing that count by 10 reads directly as a percentage.