Your First Deep Learning Project in Python with Keras Step-by-Step


By Jason Brownlee on June 18, 2022 in Deep Learning. Last Updated on June 20, 2022.

Keras is a powerful and easy-to-use free open source Python library for developing and evaluating deep learning models.

It is part of the TensorFlow library and allows you to define and train neural network models in just a few lines of code.

In this tutorial, you will discover how to create your first deep learning neural network model in Python using Keras.

Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples.

Let's get started.

Update Feb/2017: Updated prediction example so rounding works in Python 2 and 3.
Update Mar/2017: Updated example for the latest versions of Keras and TensorFlow.
Update Mar/2018: Added alternate link to download the dataset.
Update Jul/2019: Expanded and added more useful resources.
Update Sep/2019: Updated for Keras v2.2.5 API.
Update Oct/2019: Updated for Keras v2.3.0 API and TensorFlow v2.0.0.
Update Aug/2020: Updated for Keras v2.4.3 and TensorFlow v2.3.
Update Oct/2021: Replaced the deprecated predict_classes() syntax.
Update Jun/2022: Updated to modern TensorFlow syntax.

Develop Your First Neural Network in Python With Keras Step-by-Step. Photo by Phil Whitehouse, some rights reserved.

Keras Tutorial Overview

There is not a lot of code required, but we are going to step over it slowly so that you will know how to create your own models in the future.

The steps you are going to cover in this tutorial are as follows:

1. Load Data.
2. Define Keras Model.
3. Compile Keras Model.
4. Fit Keras Model.
5. Evaluate Keras Model.
6. Tie It All Together.
7. Make Predictions.

This Keras tutorial has a few requirements:

- You have Python 3 installed and configured.
- You have NumPy and SciPy installed and configured.
- You have TensorFlow installed and configured (modern TensorFlow includes Keras).

If you need help with your environment, see the tutorial: How to Setup a Python Environment for Deep Learning.

Create a new file called keras_first_network.py and type or copy-and-paste the code into the file as you go.

1. Load Data

The first step is to define the functions and classes we intend to use in this tutorial.

We will use the NumPy library to load our dataset, and we will use two classes from the Keras library to define our model.

The imports required are listed below.

# first neural network with keras tutorial
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
...

We can now load our dataset.

In this Keras tutorial, we are going to use the Pima Indians onset of diabetes dataset. This is a standard machine learning dataset from the UCI Machine Learning repository. It describes patient medical record data for Pima Indians and whether they had an onset of diabetes within five years.
As such, it is a binary classification problem (onset of diabetes as 1 or not as 0). All of the input variables that describe each patient are numerical. This makes it easy to use directly with neural networks that expect numerical input and output values, and it is ideal for our first neural network in Keras.

The dataset is available from here:

- Dataset CSV File (pima-indians-diabetes.csv)
- Dataset Details

Download the dataset and place it in your local working directory, the same location as your Python file.

Save it with the filename:

pima-indians-diabetes.csv

Take a look inside the file; you should see rows of data like the following:

6,148,72,35,0,33.6,0.627,50,1
1,85,66,29,0,26.6,0.351,31,0
8,183,64,0,0,23.3,0.672,32,1
1,89,66,23,94,28.1,0.167,21,0
0,137,40,35,168,43.1,2.288,33,1
...

We can now load the file as a matrix of numbers using the NumPy function loadtxt().

There are eight input variables and one output variable (the last column). We will be learning a model to map rows of input variables (X) to an output variable (y), which we often summarize as y = f(X).

The variables can be summarized as follows:

Input Variables (X):

1. Number of times pregnant
2. Plasma glucose concentration at 2 hours in an oral glucose tolerance test
3. Diastolic blood pressure (mm Hg)
4. Triceps skin fold thickness (mm)
5. 2-hour serum insulin (mu U/ml)
6. Body mass index (weight in kg/(height in m)^2)
7. Diabetes pedigree function
8. Age (years)

Output Variables (y):

1. Class variable (0 or 1)

Once the CSV file is loaded into memory, we can split the columns of data into input and output variables.

The data will be stored in a 2D array where the first dimension is rows and the second dimension is columns, e.g. [rows, columns].

We can split the array into two arrays by selecting subsets of columns using the standard NumPy slice operator ":". We can select the first 8 columns from index 0 to index 7 via the slice 0:8. We can then select the output column (the 9th variable) via index 8.

...
# load the dataset
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
...

We are now ready to define our neural network model.

Note, the dataset has 9 columns, and the range 0:8 will select columns from 0 to 7, stopping before index 8. If this is new to you, then you can learn more about array slicing and ranges in this post: How to Index, Slice and Reshape NumPy Arrays for Machine Learning in Python.

2. Define Keras Model

Models in Keras are defined as a sequence of layers.

We create a Sequential model and add layers one at a time until we are happy with our network architecture.

The first thing to get right is to ensure the input layer has the right number of input features. This can be specified when creating the first layer with the input_shape argument and setting it to (8,) to present the 8 input variables as a vector.

How do we know the number of layers and their types?

This is a very hard question. There are heuristics that we can use, and often the best network structure is found through a process of trial and error experimentation (I explain more about this here). Generally, you need a network large enough to capture the structure of the problem.

In this example, we will use a fully-connected network structure with three layers.

Fully connected layers are defined using the Dense class. We can specify the number of neurons or nodes in the layer as the first argument, and specify the activation function using the activation argument.

We will use the rectified linear unit activation function, referred to as ReLU, on the first two layers and the sigmoid function in the output layer.
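As an aside, both of these activations are simple element-wise functions. The minimal NumPy sketch below is not part of the tutorial code; the relu and sigmoid functions are defined here purely for illustration of what the two activations compute:

import numpy as np

def relu(x):
    # rectified linear unit: negative values become 0, positive values pass through
    return np.maximum(0.0, x)

def sigmoid(x):
    # squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(relu(np.array([-2.0, 0.5])))     # [0.  0.5]
print(sigmoid(np.array([-2.0, 0.5])))  # approximately [0.12 0.62]

This is why a sigmoid output can be read as a probability of class 1, while ReLU hidden units simply pass positive signals through and zero out negative ones.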
It used to be the case that sigmoid and tanh activation functions were preferred for all layers. These days, better performance is achieved using the ReLU activation function. We use a sigmoid on the output layer to ensure our network output is between 0 and 1 and easy to map either to a probability of class 1 or to a hard classification of either class with a default threshold of 0.5.

We can piece it all together by adding each layer:

- The model expects rows of data with 8 variables (the input_shape=(8,) argument).
- The first hidden layer has 12 nodes and uses the relu activation function.
- The second hidden layer has 8 nodes and uses the relu activation function.
- The output layer has one node and uses the sigmoid activation function.

...
# define the keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
...

Note, the most confusing thing here is that the shape of the input to the model is defined as an argument on the first hidden layer. This means that the line of code that adds the first Dense layer is doing two things: defining the input or visible layer and the first hidden layer.

3. Compile Keras Model

Now that the model is defined, we can compile it.

Compiling the model uses the efficient numerical libraries under the covers (the so-called backend) such as Theano or TensorFlow. The backend automatically chooses the best way to represent the network for training and making predictions to run on your hardware, such as a CPU, a GPU, or even distributed hardware.

When compiling, we must specify some additional properties required when training the network. Remember, training a network means finding the best set of weights to map inputs to outputs in our dataset.

We must specify the loss function to use to evaluate a set of weights, the optimizer used to search through different weights for the network, and any optional metrics we would like to collect and report during training.

In this case, we will use cross entropy as the loss argument. This loss is for binary classification problems and is defined in Keras as "binary_crossentropy". You can learn more about choosing loss functions based on your problem here: How to Choose Loss Functions When Training Deep Learning Neural Networks.

We will define the optimizer as the efficient stochastic gradient descent algorithm "adam". This is a popular version of gradient descent because it automatically tunes itself and gives good results on a wide range of problems. To learn more about the Adam version of stochastic gradient descent, see the post: Gentle Introduction to the Adam Optimization Algorithm for Deep Learning.

Finally, because it is a classification problem, we will collect and report the classification accuracy, defined via the metrics argument.

...
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
...

4. Fit Keras Model

We have defined our model and compiled it, ready for efficient computation.

Now it is time to execute the model on some data.

We can train or fit our model on our loaded data by calling the fit() function on the model.

Training occurs over epochs, and each epoch is split into batches.

- Epoch: One pass through all of the rows in the training dataset.
- Batch: One or more samples considered by the model within an epoch before weights are updated.

One epoch is comprised of one or more batches, based on the chosen batch size, and the model is fit for many epochs. For more on the difference between epochs and batches, see the post: What is the Difference Between a Batch and an Epoch in a Neural Network?
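As a quick illustration of how those two terms relate, the small sketch below (not from the original tutorial) counts the weight updates implied by the dataset size and by the batch size and epoch count chosen in the next step; the numbers are only illustrative:

import math

rows = 768        # samples in pima-indians-diabetes.csv
batch_size = 10   # samples per weight update (one batch)
epochs = 150      # full passes over the dataset

updates_per_epoch = math.ceil(rows / batch_size)  # 77 batches per epoch
total_updates = updates_per_epoch * epochs        # 11550 weight updates in total
print(updates_per_epoch, total_updates)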
The training process will run for a fixed number of iterations through the dataset, called epochs, which we must specify using the epochs argument. We must also set the number of dataset rows that are considered before the model weights are updated within each epoch, called the batch size, set using the batch_size argument.

For this problem, we will run for a small number of epochs (150) and use a relatively small batch size of 10.

These configurations can be chosen experimentally by trial and error. We want to train the model enough so that it learns a good (or good enough) mapping of rows of input data to the output classification. The model will always have some error, but the amount of error will level out after some point for a given model configuration. This is called model convergence.

...
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10)
...

This is where the work happens on your CPU or GPU.

No GPU is required for this example, but if you're interested in how to run large models on GPU hardware cheaply in the cloud, see this post: How to Setup Amazon AWS EC2 GPUs to Train Keras Deep Learning Models.

5. Evaluate Keras Model

We have trained our neural network on the entire dataset, and we can evaluate the performance of the network on the same dataset.

This will only give us an idea of how well we have modeled the dataset (e.g. train accuracy), but no idea of how well the algorithm might perform on new data. We have done this for simplicity, but ideally, you could separate your data into train and test datasets for training and evaluation of your model.

You can evaluate your model on your training dataset using the evaluate() function and pass it the same input and output used to train the model.

This will generate a prediction for each input and output pair and collect scores, including the average loss and any metrics you have configured, such as accuracy.

The evaluate() function will return a list with two values. The first will be the loss of the model on the dataset, and the second will be the accuracy of the model on the dataset. We are only interested in reporting the accuracy, so we will ignore the loss value.

...
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))

6. Tie It All Together

You have just seen how you can easily create your first neural network model in Keras.

Let's tie it all together into a complete code example.
# first neural network with keras tutorial
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10)
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))

You can copy all of the code into your Python file and save it as "keras_first_network.py" in the same directory as your data file "pima-indians-diabetes.csv". You can then run the Python file as a script from your command line (command prompt) as follows:

python keras_first_network.py

Running this example, you should see a message for each of the 150 epochs printing the loss and accuracy, followed by the final evaluation of the trained model on the training dataset.

It takes about 10 seconds to execute on my workstation running on the CPU.

Ideally, we would like the loss to go to zero and the accuracy to go to 1.0 (e.g. 100%). This is not possible for any but the most trivial machine learning problems. Instead, we will always have some error in our model. The goal is to choose a model configuration and training configuration that achieve the lowest loss and highest accuracy possible for a given dataset.

...
768/768 [==============================] - 0s 63us/step - loss: 0.4817 - acc: 0.7708
Epoch 147/150
768/768 [==============================] - 0s 63us/step - loss: 0.4764 - acc: 0.7747
Epoch 148/150
768/768 [==============================] - 0s 63us/step - loss: 0.4737 - acc: 0.7682
Epoch 149/150
768/768 [==============================] - 0s 64us/step - loss: 0.4730 - acc: 0.7747
Epoch 150/150
768/768 [==============================] - 0s 63us/step - loss: 0.4754 - acc: 0.7799
768/768 [==============================] - 0s 38us/step
Accuracy: 76.56

Note, if you try running this example in an IPython or Jupyter notebook, you may get an error.

The reason is the output progress bars during training. You can easily turn these off by setting verbose=0 in the call to the fit() and evaluate() functions, for example:

...
# fit the keras model on the dataset without progress bars
model.fit(X, y, epochs=150, batch_size=10, verbose=0)
# evaluate the keras model
_, accuracy = model.evaluate(X, y, verbose=0)
...

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

What score did you get? Post your results in the comments below.

Neural networks are stochastic algorithms, meaning that the same algorithm on the same data can train a different model with different skill each time the code is run. This is a feature, not a bug. You can learn more about this in the post: Embrace Randomness in Machine Learning.

The variance in the performance of the model means that to get a reasonable approximation of how well your model is performing, you may need to fit it many times and calculate the average of the accuracy scores. For more on this approach to evaluating neural networks, see the post: How to Evaluate the Skill of Deep Learning Models.

For example, below are the accuracy scores from re-running the example 5 times:

Accuracy: 75.00
Accuracy: 77.73
Accuracy: 77.60
Accuracy: 78.12
Accuracy: 76.17

We can see that all accuracy scores are around 77% and the average is 76.924%.

7. Make Predictions

The number one question I get asked is:

"After I train my model, how can I use it to make predictions on new data?"

Great question.

We can adapt the above example and use it to generate predictions on the training dataset, pretending it is a new dataset we have not seen before.

Making predictions is as easy as calling the predict() function on the model. We are using a sigmoid activation function on the output layer, so the predictions will be a probability in the range between 0 and 1. We can easily convert them into a crisp binary prediction for this classification task by rounding them.

For example:

...
# make probability predictions with the model
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]

Alternately, we can convert the probability into 0 or 1 to predict crisp classes directly, for example:

...
# make class predictions with the model
predictions = (model.predict(X) > 0.5).astype(int)

The complete example below makes predictions for each example in the dataset, then prints the input data, predicted class, and expected class for the first 5 examples in the dataset.
# first neural network with keras make predictions
from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10, verbose=0)
# make class predictions with the model
predictions = (model.predict(X) > 0.5).astype(int)
# summarize the first 5 cases
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))

Running the example does not show the progress bar as before because we have set the verbose argument to 0.

After the model is fit, predictions are made for all examples in the dataset, and the input rows and predicted class value for the first 5 examples are printed and compared to the expected class value.

We can see that most rows are correctly predicted. In fact, we would expect about 76.9% of the rows to be correctly predicted based on our estimated performance of the model in the previous section.

[6.0, 148.0, 72.0, 35.0, 0.0, 33.6, 0.627, 50.0] => 0 (expected 1)
[1.0, 85.0, 66.0, 29.0, 0.0, 26.6, 0.351, 31.0] => 0 (expected 0)
[8.0, 183.0, 64.0, 0.0, 0.0, 23.3, 0.672, 32.0] => 1 (expected 1)
[1.0, 89.0, 66.0, 23.0, 94.0, 28.1, 0.167, 21.0] => 0 (expected 0)
[0.0, 137.0, 40.0, 35.0, 168.0, 43.1, 2.288, 33.0] => 1 (expected 1)

If you would like to know more about how to make predictions with Keras models, see the post: How to Make Predictions with Keras.

Keras Tutorial Summary

In this post, you discovered how to create your first neural network model using the powerful Keras Python library for deep learning.

Specifically, you learned the six key steps in using Keras to create a neural network or deep learning model, step-by-step, including:

- How to load data.
- How to define a neural network in Keras.
- How to compile a Keras model using the efficient numerical backend.
- How to train a model on data.
- How to evaluate a model on data.
- How to make predictions with the model.

Do you have any questions about Keras or about this tutorial? Ask your question in the comments, and I will do my best to answer.

Keras Tutorial Extensions

Well done, you have successfully developed your first neural network using the Keras deep learning library in Python.

This section provides some extensions to this tutorial that you might want to explore.
- Tune the Model. Change the configuration of the model or training process and see if you can improve the performance of the model, e.g. achieve better than 76% accuracy.
- Save the Model. Update the tutorial to save the model to file, then load it later and use it to make predictions (see this tutorial).
- Summarize the Model. Update the tutorial to summarize the model and create a plot of model layers (see this tutorial).
- Separate Train and Test Datasets. Split the loaded dataset into a train and test set (split based on rows) and use one set to train the model and the other set to estimate the performance of the model on new data. A minimal sketch of this extension appears at the end of the post, after the Further Reading section.
- Plot Learning Curves. The fit() function returns a history object that summarizes the loss and accuracy at the end of each epoch. Create line plots of this data, called learning curves (see this tutorial).
- Learn a New Dataset. Update the tutorial to use a different tabular dataset, perhaps from the UCI Machine Learning Repository.
- Use Functional API. Update the tutorial to use the Keras Functional API for defining the model (see this tutorial).

Further Reading

Are you looking for some more deep learning tutorials with Python and Keras? Take a look at some of these:

Related Tutorials

- 5 Step Life-Cycle for Neural Network Models in Keras
- Multi-Class Classification Tutorial with the Keras Deep Learning Library
- Regression Tutorial with the Keras Deep Learning Library in Python
- How to Grid Search Hyperparameters for Deep Learning Models in Python With Keras

Books

- Deep Learning (Textbook), 2016.
- Deep Learning with Python (my book).

APIs

- Keras Deep Learning Library Homepage
- Keras API Documentation

How did you go? Do you have any questions about deep learning? Post your questions in the comments below, and I will do my best to help.
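Below is a minimal sketch of the Separate Train and Test Datasets extension mentioned above. It is not part of the original tutorial code: it assumes scikit-learn is installed and uses its train_test_split() helper to hold out a portion of the rows, then reports accuracy on those unseen rows.

# sketch of the "Separate Train and Test Datasets" extension (assumes scikit-learn is available)
from numpy import loadtxt
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# load and split the dataset into input (X) and output (y) variables
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X, y = dataset[:,0:8], dataset[:,8]

# hold out one third of the rows to estimate performance on new data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=7)

# same model as in the tutorial
model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# fit on the training rows only
model.fit(X_train, y_train, epochs=150, batch_size=10, verbose=0)

# accuracy on the held-out rows is a better guide to performance on new data
_, test_accuracy = model.evaluate(X_test, y_test, verbose=0)
print('Test Accuracy: %.2f' % (test_accuracy * 100))

Test accuracy from a split like this will usually be a little lower (and noisier) than the training accuracy reported above, which is the point of the exercise.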
MarcoCheung August23,2017at12:51am # HiJason, Iaminterestedindeeplearningandmachinelearning.Youmentioned“Itdefinesahiddenlayerwith12neurons,connectedtotheinputlayerthatusereluactivationfunction.”Iwonderhowcanwedeterminethenumberofneuronsinordertoachieveahighaccuracyrateofthemodel? Thanksalot!!! JasonBrownlee August23,2017at6:55am # Usetrialanderror.Wecannotspecifythe“best”numberofneuronsanalytically.Wemusttest. RamzanShahid November10,2017at4:32am # Sir,thanksforyourtutorial.WouldyouliketomaketutorialonstockDataPredictionthroughNeuralNetworkModelandtrainingthisonanystockdata.Ifyouhaveonthissopleasesharethelink.Thanks JasonBrownlee November10,2017at10:39am # Iamreticenttoposttutorialsonstockmarketpredictiongiventherandomwalkhypothesisofsecurityprices: https://machinelearningmastery.com/gentle-introduction-random-walk-times-series-forecasting-python/ DharaBhavsar August28,2019at9:54pm # Hi, Iwouldliketoknowmoreaboutactivationfunction.Howitisworking?Howmanyactivationfunctions?UsingdifferentactivationfunctionHowmuchaffecttheoutputofthemodel? IwouldliketoalsoknowabouttheHiddenLayer.Howthesizeofthehiddenlayeraffectthemodel? JasonBrownlee August29,2019at6:09am # Inthistutorial,weusereluinthehiddenlayers,learnmorehere: https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/ Thesizeofthelayerimpactsthecapacityofthemodel,learnmorehere: https://machinelearningmastery.com/how-to-control-neural-network-model-capacity-with-nodes-and-layers/ dhani June28,2018at2:44am # hihowusecnnforpixelclassificationonmhdimages Reply JasonBrownlee June28,2018at6:22am # Whatispixelclassification?Whataremhdimages? Reply TanmayKulkarni February11,2020at5:50am # Hello!Iwanttoknowifthere’sawaytoknowthevaluesofallweightsaftereachupdation? Reply JasonBrownlee February11,2020at5:53am # Yes,youcansavethemtofileorreviewthemmanually. 
Oftensavingisachievedusingacheckpoint: https://machinelearningmastery.com/check-point-deep-learning-models-keras/ Reply BlackBookKeeper August18,2018at10:15pm # runfile(‘C:/Users/Owner/Documents/untitled1.py’,wdir=’C:/Users/Owner/Documents’) Traceback(mostrecentcalllast): File“”,line1,in runfile(‘C:/Users/Owner/Documents/untitled1.py’,wdir=’C:/Users/Owner/Documents’) File“C:\Users\Owner\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py”,line705,inrunfile execfile(filename,namespace) File“C:\Users\Owner\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py”,line102,inexecfile exec(compile(f.read(),filename,‘exec’),namespace) File“C:/Users/Owner/Documents/untitled1.py”,line13,in model.add(Dense(12,input_dim=8,activation=’relu’)) File“C:\Users\Owner\Anaconda3\lib\site-packages\keras\engine\sequential.py”,line160,inadd name=layer.name+‘_input’) File“C:\Users\Owner\Anaconda3\lib\site-packages\keras\engine\input_layer.py”,line177,inInput input_tensor=tensor) File“C:\Users\Owner\Anaconda3\lib\site-packages\keras\legacy\interfaces.py”,line91,inwrapper returnfunc(*args,**kwargs) File“C:\Users\Owner\Anaconda3\lib\site-packages\keras\engine\input_layer.py”,line86,in__init__ name=self.name) File“C:\Users\Owner\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py”,line515,inplaceholder x=tf.placeholder(dtype,shape=shape,name=name) File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\ops\array_ops.py”,line1530,inplaceholder returngen_array_ops._placeholder(dtype=dtype,shape=shape,name=name) File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\ops\gen_array_ops.py”,line1954,in_placeholder name=name) File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\framework\op_def_library.py”,line767,inapply_op op_def=op_def) File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\framework\ops.py”,line2508,increate_op set_shapes_for_outputs(ret) File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\framework\ops.py”,line1894,inset_shapes_for_outputs output.set_shape(s) File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\framework\ops.py”,line443,inset_shape self._shape=self._shape.merge_with(shape) File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\framework\tensor_shape.py”,line550,inmerge_with stop=key.stop File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\framework\tensor_shape.py”,line798,inas_shape “””ReturnsthisshapeasaTensorShapeProto.””” File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\framework\tensor_shape.py”,line431,in__init__ sizeforoneormoredimension.e.g.TensorShape([None,256]) File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\framework\tensor_shape.py”,line376,inas_dimension other=as_dimension(other) File“C:\Users\Owner\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\framework\tensor_shape.py”,line32,in__init__ ifvalueisNone: TypeError:int()argumentmustbeastring,abytes-likeobjectoranumber,not‘TensorShapeProto’ thiserroroccurswhen{model.add(Dense(12,input_dim=8,activation=’relu’))}thiscommandisrun anyhelp? 
Reply JasonBrownlee August19,2018at6:20am # Saveallcodeintoafileandrunitasfollows: https://machinelearningmastery.com/faq/single-faq/how-do-i-run-a-script-from-the-command-line Reply Penchalaiah December8,2019at6:24pm # Fantastictutorial.Theexplanationissimpleandprecise.Thanksalot Reply JasonBrownlee December9,2019at6:47am # Thanks! Reply Loc June29,2022at1:00pm # greatarttist Reply Geoff May29,2016at6:18am # Canyouexplainhowtoimplementweightregularizationintothelayers? Reply JasonBrownlee June15,2016at5:50am # Yep,seehere: http://keras.io/regularizers/ Reply afthab October5,2018at8:32pm # heyyo!!!howurstartcodinginpython Reply JasonBrownlee October6,2018at5:43am # Starthere: https://machinelearningmastery.com/faq/single-faq/how-do-i-get-started-with-python-programming Reply KWC June14,2016at12:08pm # Importstatementsifothersneedthem: fromkeras.modelsimportSequential fromkeras.layersimportDense,Activation Reply JasonBrownlee June15,2016at5:49am # Thanks. IhadtheminPart6,butIhavealsoaddedthemtoPart1. Reply Shiran January20,2020at11:30am # Greatpost! Isitpossibletotrainaneuralnetworkthatreceivesasinputavectorxandtriestopredictanothervectorywherebothxandyarefloats? Reply JasonBrownlee January20,2020at2:07pm # Yes,thisiscalledregression: https://machinelearningmastery.com/regression-tutorial-keras-deep-learning-library-python/ Reply AakashNain June29,2016at6:00pm # Ifthereare8inputsforthefirstlayerthenwhywehavetakenthemas’12’inthefollowingline: model.add(Dense(12,input_dim=8,init=’uniform’,activation=’relu’)) Reply JasonBrownlee June30,2016at6:47am # HiAakash. Theinputlayerisdefinedbytheinput_dimparameter,heresetto8. Thefirsthiddenlayerhas12neurons. Reply Joshua July2,2016at12:04am # Iranyourprogramandihaveanerror: ValueError:couldnotconvertstringtofloat: whatcouldbethereasonforthis,andhowmayIsolveit. thanks. greatpostbytheway. Reply JasonBrownlee July2,2016at6:20am # Itmightbeacopy-pasteerror.Perhapstrytocopyandrunthewholeexamplelistedinsection6? Reply Akash September28,2018at11:12am # Hellosir,IamfacingthesameproblemvalueError:couldnotconvertstringtofloat:‘”6’ alsoIamrunningtheexamplefromsection6. Reply JasonBrownlee September28,2018at3:00pm # Ihavesomesuggestionshere: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me Reply yashu October5,2018at8:28pm # jasoncanuplzzhelpmehowtocode Reply JasonBrownlee October6,2018at5:42am # Sorry,Icannothelpyoutowritecode. Reply KeyChy July3,2019at5:45pm # Maybewhenyousetallparametersinanextracolumninyour*.csvfile.Thanyouschouldreplacethedelimiterfrom,to;like: dataset=numpy.loadtxt(“pima-indians-diabetes.csv”,delimiter=”;”) ThissolvedtheProblemforme. Reply JasonBrownlee July4,2019at7:40am # Thanksforsharing. Reply cheikhbrahim July5,2016at7:40pm # thankyouforyoursimpleandusefulexample. Reply JasonBrownlee July6,2016at6:22am # You’rewelcomecheikh. Reply NikhilThakur July6,2016at6:39pm # HelloSir,IamtryingtouseKerasforNLP,specificallysentenceclassification.Ihavegiventhemodelbuildingpartbelow.It’stakingquitealottimetoexecute.IamusingPycharmIDE. 
batch_size=32 nb_filter=250 filter_length=3 nb_epoch=2 pool_length=2 output_dim=5 hidden_dims=250 #Buildthemodel model1=Sequential() model1.add(Convolution1D(nb_filter,filter_length,activation=’relu’,border_mode=’valid’, input_shape=(len(embb_weights),dim),weights=[embb_weights])) model1.add(Dense(hidden_dims)) model1.add(Dropout(0.2)) model1.add(Activation(‘relu’)) model1.add(MaxPooling1D(pool_length=pool_length)) model1.add(Dense(output_dim,activation=’sigmoid’)) sgd=SGD(lr=0.1,decay=1e-6,momentum=0.9,nesterov=True) model1.compile(loss=’mean_squared_error’, optimizer=sgd, metrics=[‘accuracy’]) Reply JasonBrownlee July7,2016at7:31am # Youmaywantalargernetwork.YoumayalsowanttouseastandardrepeatingstructurelikeCNN->CNN->Pool->Dense. SeethispostonusingaCNN: http://machinelearningmastery.com/handwritten-digit-recognition-using-convolutional-neural-networks-python-keras/ Later,youmayalsowanttotrysomestackedLSTMs. Reply AndreNorman July15,2016at10:40am # HiJason,thanksfortheawesomeexample.Giventhattheaccuracyofthismodelis79.56%.Fromhereon,whatstepswouldyoutaketoimprovetheaccuracy? GivenmynascentunderstandingofMachineLearning,myinitialapproachwouldhavebeen: Implementforwardpropagation,thencomputethecostfunction,thenimplementbackpropagation,usegradientcheckingtoevaluatemynetwork(disableafteruse),thenusegradientdescent. However,thisapproachseemsarduouscomparedtousingKeras.Thanksforyourresponse. Reply JasonBrownlee July15,2016at10:52am # HiAndre,indeedKerasmakesworkingwithneuralnetssomucheasier.Funeven! Wemaybemaxingoutonthisproblem,buthereissomegeneraladviceforliftingperformance. –dataprep–trylotsofdifferentviewsoftheproblemandseewhichisbestatexposingthestructureoftheproblemtothelearningalgorithm(datatransforms,featureengineering,etc.) –algorithmselection–trylotsofalgorithmsandseewhichoneorfewarebestontheproblem(tryonallviews) –algorithmtuning–tunewellperformingalgorithmstogetthemostoutofthem(gridsearchorrandomsearchhyperparametertuning) –ensembles–combinepredictionsfrommultiplealgorithms(stacking,boosting,bagging,etc.) Forneuralnets,therearealotofthingstotune,Ithinktherearebiggainsintryingdifferentnetworktopologies(layersandnumberofneuronsperlayer)inconcertwithtrainingepochsandlearningrate(biggernetsneedmoretraining). Ihopethathelpsasastart. Reply AndreNorman July18,2016at7:19am # Awesome!ThanksJason=) Reply JasonBrownlee July18,2016at8:03am # You’rewelcomeAndre. Reply quentin August7,2017at8:41pm # Someinterestingstuffhere https://youtu.be/vq2nnJ4g6N0 Reply JasonBrownlee August8,2017at7:49am # Thanksforsharing.Whatdidyoulikeaboutit? Reply RomillyCocking July21,2016at12:31am # HiJason,it’sagreatexamplebutifanyonerunsitinanIPython/JupyternotebooktheyarelikelytoencounteranI/Oerrorwhenrunningthefitstep.ThisisduetoaknownbuginIPython. Thesolutionistosetverbose=0likethis #Fitthemodel model.fit(X,Y,nb_epoch=40,batch_size=10,verbose=0) Reply JasonBrownlee July21,2016at5:36am # Great,thanksforsharingRomilly. Reply Anirban July23,2016at10:20pm # Greatexample.Haveaquerythough.HowdoInowgiveainputandgettheoutput(0or1).Canyouplsgivethecmdforthat. Thanks Reply JasonBrownlee July24,2016at6:53am # Youcancallmodel.predict()togetpredictionsandroundoneachvaluetosnaptoabinaryvalue. Forexample,belowisacompleteexampleshowingyouhowtoroundthepredictionsandprintthemtoconsole. 
#CreatefirstnetworkwithKeras fromkeras.modelsimportSequential fromkeras.layersimportDense importnumpy #fixrandomseedforreproducibility seed=7 numpy.random.seed(seed) #loadpimaindiansdataset dataset=numpy.loadtxt("pima-indians-diabetes.csv",delimiter=",") #splitintoinput(X)andoutput(Y)variables X=dataset[:,0:8] Y=dataset[:,8] #createmodel model=Sequential() model.add(Dense(12,input_dim=8,init='uniform',activation='relu')) model.add(Dense(8,init='uniform',activation='relu')) model.add(Dense(1,init='uniform',activation='sigmoid')) #Compilemodel model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy']) #Fitthemodel model.fit(X,Y,nb_epoch=150,batch_size=10,verbose=2) #calculatepredictions predictions=model.predict(X) #roundpredictions rounded=[round(x)forxinpredictions] print(rounded) 1234567891011121314151617181920212223242526 #CreatefirstnetworkwithKerasfromkeras.modelsimportSequentialfromkeras.layersimportDenseimportnumpy#fixrandomseedforreproducibilityseed=7numpy.random.seed(seed)#loadpimaindiansdatasetdataset=numpy.loadtxt("pima-indians-diabetes.csv",delimiter=",")#splitintoinput(X)andoutput(Y)variablesX=dataset[:,0:8]Y=dataset[:,8]#createmodelmodel=Sequential()model.add(Dense(12,input_dim=8,init='uniform',activation='relu'))model.add(Dense(8,init='uniform',activation='relu'))model.add(Dense(1,init='uniform',activation='sigmoid'))#Compilemodelmodel.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])#Fitthemodelmodel.fit(X,Y,nb_epoch=150,batch_size=10,  verbose=2)#calculatepredictionspredictions=model.predict(X)#roundpredictionsrounded=[round(x)forxinpredictions]print(rounded) Reply Debanjan March27,2017at12:04pm # Hi,Whyyouarenotusinganytestset?Youarepredictingfromthetrainingset,Ithink. Reply JasonBrownlee March28,2017at8:19am # Correct,itisjustanexampletogetyoustartedwithKeras. Reply David June26,2017at12:24am # Jason,I’mnotquiteunderstandinghowthepredictedvalues([1.0,0.0,1.0,0.0,1.0,…)maptotherealworldproblem.Forinstance,whatdoesthatfirst“1.0”intheresultsindicate? Igetthatit’sapredictionof‘true’fordiabetes…buttowhichpatientisitpredictingthat—thefirstinthelist?Sothenthesecondresult,“0.0,”isthepredictionforthesecondpatient/rowinthedataset? Reply JasonBrownlee June26,2017at6:08am # Remembertheoriginalfilehas0and1valuesinthefinalclasscolumnwhere0isnoonsetofdiabetesand1isanonsetofdiabetes. Wearepredictingnewvaluesinthiscolumn. Wearemakingpredictionsforspecialrows,wepassintheirmedicalinfoandpredicttheonsetofdiabetes.Wejusthappentodothisforanumberofrowsatatime. Reply ami July16,2018at4:30pm # hellojason iamgettingthiserrorwhilecalculatingthepredictions. #calculatepredictions predictions=model.predict(X) #roundpredictions rounded=[round(x)forxinpredictions] print(rounded) ————————————————————————— TypeErrorTraceback(mostrecentcalllast) in() 2predictions=model.predict(X) 3#roundpredictions —->4rounded=[round(x)forxinpredictions] 5print(rounded) in(.0) 2predictions=model.predict(X) 3#roundpredictions —->4rounded=[round(x)forxinpredictions] 5print(rounded) TypeError:typenumpy.ndarraydoesn’tdefine__round__method JasonBrownlee July17,2018at6:09am # Tryremovingthecalltoround(). Rachel June28,2017at8:28pm # HiJason, CanIaskwhyyouusethesamedataXyoufitthemodeltodotheprediction? #Fitthemodel model.fit(X,Y,epochs=150,batch_size=10,verbose=2) #calculatepredictions predictions=model.predict(X) Rachel Reply JasonBrownlee June29,2017at6:34am # ItisallIhaveathand.Xmeansdatamatrix. ReplaceXinpredict()withXprimeorwhateveryoulike. 
Reply jitendra March27,2018at7:20pm # hii,howwillifeedtheinput(8,125,96,0,0,0.0,0.232,54)togetouroutput. predictions=model.predict(X) imeaninseadofXiwanttogetoutputof8,125,96,0,0,0.0,0.232,54. Reply JasonBrownlee March28,2018at6:24am # Wrapyourinputinanarray,n-columnswithonerow,thenpassthattothemodel. Doesthathelp? Reply Roman October5,2018at11:22pm # Hello,tryingtousepredictionsonsimilarneuralnetworkbutkeepgettingerrorsthatinputdimensionhasothershape. Canyousayhowarraymustlookonexampledneuralnetwork? JasonBrownlee October6,2018at5:45am # ForanMLP,datamustbeorganizedintoa2darrayofsamplesxfeatures Anirban July23,2016at10:52pm # Iamnotabletogettothelastepoch.Gettingerrorbeforethat: Epoch11/150 390/768[==============>……………]Traceback(mostrecentcalllast):.6921 ValueError:I/Ooperationonclosedfile Icouldresolvethisbyvaryingtheepochandbatchsize. Nowtopredictaunknownvalue,iloadedanewdatasetandusedpredictcmdasbelow: dataset_test=numpy.loadtxt(“pima-indians-diabetes_test.csv”,delimiter=”,”)–hasonlyonerow X=dataset_test[:,0:8] model.predict(X) ButIamgettingerror: X=dataset_test[:,0:8] IndexError:toomanyindicesforarray Canyouhelppls. Thanks Reply JasonBrownlee July24,2016at6:55am # IseeproblemslikethiswhenyourunfromanotebookorfromanIDE. Considerrunningexamplesfromtheconsoletoensuretheywork. Considertuningoffverboseoutput(verbose=0inthecalltofit())todisabletheprogressbar. Reply DavidKluszczynski July28,2016at12:42am # HiJason! Lovedthetutorial!Ihaveaquestionhowever. Isthereawaytosavetheweightstoafileafterthemodelistrainedforuses,suchaskaggle? Thanks, David Reply JasonBrownlee July28,2016at5:47am # ThanksDavid. Youcansavethenetworkweightstofilebycallingmodel.save_weights(“model.h5”) Youcanlearnmoreinthispost: http://machinelearningmastery.com/save-load-keras-deep-learning-models/ Reply AlexHopper July29,2016at5:45am # Hey,Jason!Thankyoufortheawesometutorial!I’veuseyourtutorialtolearnaboutCNN.Ihaveonequestionforyou…SupposingIwanttouseKerastoclassicateimagesandIhave3ormoreclassestoclassify,Howcouldmyalgorithmknowaboutthisclasses?Youknow,Ihavetocodewhatisacat,adogandahorse.Isthereanywaytocodethis?I’vetriedit: target_names=[‘class0(Cats)’,‘class1(Dogs)’,‘class2(Horse)’] print(classification_report(np.argmax(Y_test,axis=1),y_pred,target_names=target_names)) Butmyresultsarenotclassifyingcorrectly. precisionrecallf1-scoresupport class0(Cat)0.000.000.0017 class1(Dog)0.000.000.0014 class2(Horse)0.991.000.992526 avg/total0.980.990.982557 Reply JasonBrownlee July29,2016at6:41am # GreatquestionAlex. Thisisanexampleofamulti-classclassificationproblem.Youmustuseaonehotencodingontheoutputvariabletobeabletomodelitwithaneuralnetworkandspecifythenumberofclassesasthenumberofoutputsonthefinallayerofyournetwork. Iprovideatutorialwiththefamousirisdatasetthathas3outputclasseshere: http://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/ Reply AlexHopper August1,2016at1:22am # Thankyou. I’llcheckit. Reply JasonBrownlee August1,2016at6:25am # NoproblemAlex. Reply Anonymouse August2,2016at11:28pm # Thiswasreallyuseful,thankyou I’musingkeras(withCNNs)forsentimentclassificationofdocumentsandI’dliketoimprovetheperformance,butI’mcompletelyatalosswhenitcomestotuningtheparametersinanon-arbitraryway.Couldyoumaybepointmesomewherethatwillhelpmegoaboutthisinamoresystematicfashion?Theremustbesomeheuristicsorrules-of-thumbthatcouldguideme. 
Reply JasonBrownlee August3,2016at8:09am # Ihaveatutorialcomingoutsoon(nextweek)thatprovidelotsofexamplesoftuningthehyperparametersofaneuralnetworkinKeras,butlimitedtoMLPs. ForCNNs,Iwouldadvisetuningthenumberofrepeatinglayers(conv+maxpool),thenumberoffiltersinrepeatingblock,andthenumberandsizeofdenselayersatthepredictingpartofyournetwork.Alsoconsiderusingsomefixedlayersfrompre-trainedmodelsasthestartofyournetwork(e.g.VGG)andtryjusttrainingsomeinputandoutputlayersarounditforyourproblem. Ihopethathelpsasastart. Reply Shopon August14,2016at5:04pm # HelloJason,MyAccuracyis:0.0104,butyoursis0.7879andmylossis:-9.5414.Isthereanyproblemwiththedataset?Idownloadedthedatasetfromadifferentsite. Reply JasonBrownlee August15,2016at12:36pm # Ithinktheremightbesomethingwrongwithyourimplementationoryourdataset.Yournumbersarewayout. Reply mohamed August15,2016at9:30am # aftertraining,howicanusethetrainedmodelonnewsample Reply JasonBrownlee August15,2016at12:36pm # Youcancallmodel.predict() Seeanabovecommentforaspecificcodeexample. Reply OmachiOkolo August16,2016at10:21pm # HiJason, i’mastudentconductingaresearchonhowtouseartificialneuralnetworktopredictthebusinessviabilityofpotentialsoftwareprojects. Iintendtousepythonasaprogramminglanguage.TheapplicationofANNfascinatesmebuti’mnewtomachinelearningandpython.Canyouhelpsuggesthowtogoaboutthis. Manythanks Reply JasonBrownlee August17,2016at9:51am # Considergettingagoodgroundinginhowtoworkthroughamachinelearningproblemendtoendinpythonfirst. Hereisagoodtutorialtogetyoustarted: http://machinelearningmastery.com/machine-learning-in-python-step-by-step/ Reply Agni August17,2016at6:23am # DearJeson,thisisagreattutorialforbeginners.Itwillsatisfytheneedofmanystudentswhoarelookingfortheinitialhelp.ButIhaveaquestion.Couldyoupleaselightonafewthings:i)howtotestthetrainedmodelusingtestdataset(i.e.,loadingoftestdatasetandappliedthemodelandsupposethetestfilenameistest.csv)ii)printtheaccuracyobtainedontestdatasetiii)theo/phasmorethan2class(suppose4-classclassificationproblem). Pleaseshowthewholeprogramtoovercomeanyconfusion. Thanksalot. Reply JasonBrownlee August17,2016at10:03am # Iprovideanexampleelsewhereinthecomments,youcanalsoseehowtomakepredictionsonnewdatainthispost: http://machinelearningmastery.com/5-step-life-cycle-neural-network-models-keras/ Foranexampleofmulti-classclassification,youcanseethistutorial: http://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/ Reply DoronVetlzer August17,2016at9:29am # IamtryingtobuildaNeuralNetworkwithsomerecursiveconnectionsbutnotafullrecursivelayer,howdoIdothisinKeras? Reply DoronVetlzer August17,2016at9:31am # IcouldprintadiagramofthenetworkbutwhatIwantBasicallyisthateachneuroninthecurrenttimeframetoknowonlyitsownpreviousoutputandnottheoutputofalltheneuronsintheoutputlayer. Reply JasonBrownlee August17,2016at10:04am # Idon’tknowoffhandDoron. Reply DoronVeltzer August23,2016at2:28am # Thanksforreplyingthough,haveagoodday. Reply sairam August30,2016at8:49am # HelloJason, Thisisagreattutorial.Thanksforsharing. Iamhavingadatasetof100fingerprintsandiwanttoextractminutiaeof100fingerprintsusingpython(Keras).Canyoupleaseadvisewheretostart?Iamreallyconfused. Reply JasonBrownlee August31,2016at8:43am # Ifyourfingerprintsareimages,youmaywanttoconsiderusingconvolutionalneuralnetworks(CNNs)thataremuchbetteratworkingimagedata. 
Seethistutorialondigitrecognitionforastart: http://machinelearningmastery.com/handwritten-digit-recognition-using-convolutional-neural-networks-python-keras/ Reply padmashri July6,2017at10:12pm # HiJason Thanksforthisgreattutorial,iamnewtomachinelearningiwentthroughyourbasictutorialonkerasandalsohandwritten-digit-recognition.Iwouldliketounderstandhowicantrainasetofimagedata,foreg.thesetofimagedatacanbesomethinglikesquare,circle,pyramid. pl.letmeknowhowtheinputdataneedstofedtotheprogramandhowweneedtoexportthemodel. Reply JasonBrownlee July9,2017at10:30am # Startbypreparingahigh-qualitydataset. Reply CM September1,2016at4:23pm # HiJason, Thanksforthegreatarticle.ButIhad1query. ArethereanyinbuiltfunctionsinkerasthatcangivemethefeatureimportancefortheANNmodel? Ifnot,canyousuggestatechniqueIcanusetoextractvariableimportancefromthelossfunction?IamconsideringanapproachsimilartothatusedinRFwhichinvolvespermutingthevaluesoftheselectedvariableandcalculatingtherelativeincreaseinloss. Regards, CM Reply JasonBrownlee September2,2016at8:07am # Idon’tbelievesoCM. Iwouldsuggestusingawrappermethodandevaluatesubsetsoffeaturestodevelopafeatureimportance/featureselectionreport. Italkalotmoreaboutfeatureselectioninthispost: http://machinelearningmastery.com/an-introduction-to-feature-selection/ Iprovideanexampleoffeatureselectioninscikit-learnhere: http://machinelearningmastery.com/feature-selection-machine-learning-python/ Ihopethathelpsasastart. Reply MineshJethva May15,2017at7:49pm # haveyoudevelopanyprogressforthisapproach?Ialsohavesameproblem. Reply Kamal September7,2016at2:09am # DearJason,IamnewtoDeeplearning.Beinganovice,Iamaskingyouatechnicalquestionwhichmayseemsilly.Myquestionisthat-canweusefeatures(forexamplelengthofthesentenceetc.)ofasentencewhileclassifyingasentence(supposetheo/pare+vesentenceand-vesentence)usingdeepneuralnetwork? Reply JasonBrownlee September7,2016at10:27am # GreatquestionKamal,yesyoucan.Iwouldencourageyoutoincludeallsuchfeaturesandseewhichgiveyouabumpinperformance. Reply Saurabh September11,2016at12:42pm # Hi,HowwouldIusethisonadatasetthathasmultipleoutputs?ForexampleadatasetwithoutputAandBwhereAcouldbe0or1andBcouldbe3or4? Reply JasonBrownlee September12,2016at8:30am # Youcouldusetwoneuronsintheoutputlayerandnormalizetheoutputvariablestobothbeintherangeof0to1. Thistutorialonmulti-classclassificationmightgiveyousomeideas: http://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/ Reply Tom_P September17,2016at1:47pm # HiJason, ThetutoriallooksreallygoodbutunfortunatelyIkeepgettinganerrorwhenimportingDensefromkeras.layers,Igettheerror:AttributeError:module‘theano’hasnoattribute‘gof’ IhavetriedreinstallingTheanobutithasnotfixedtheissue. Bestwishes Tom Reply JasonBrownlee September18,2016at7:57am # HiTom,sorrytohearthat.Ihavenotseenthisproblembefore. Haveyousearchedgoogle?Icanseeafewpostsanditmightberelatedtoyourversionofscipyorsimilar. Letmeknowhowyougo. Reply shudhan September21,2016at5:54pm # HeyJason, Canyoupleasemakeatutorialonhowtoaddadditionaltraindataintothealreadytrainedmodel?Thiswillbehelpfulforthebiggerdatasets.Ireadthatwarmstartisusedforrandomforest.Butnotsurehowtoimplementasalgorithm.Ageneralisedversionofhowtoimplementwouldbegood.ThankYou! Reply JasonBrownlee September22,2016at8:08am # GreatquestionShudhan! Yes,youcouldsaveyourweights,loadthemlaterintoanewnetworktopologyandstarttrainingonnewdataagain. I’llworkoutanexampleincomingweeks,timepermitting. 
Reply Joanna September22,2016at1:09am # HiJason, firstofallcongratulationsforthisamazingworkthatyouhavedone! Hereismyquestion: Whataboutifmy.csvfileincludesalsobothnominalandnumericalattributes? ShouldIchangemynominalvaluestonumerical? Thankyouinadvance Reply JasonBrownlee September22,2016at8:19am # HiJoanna,yes. Youcanusealabelencodertoconvertnominaltointeger,andthenevenconverttheintegertoonehotencoding. Thispostwillgiveyoucodeyoucanuse: http://machinelearningmastery.com/data-preparation-gradient-boosting-xgboost-python/ Reply ATM October2,2016at5:47am # Asmallbug:- Line25:rounded=[round(x)forxinpredictions] shouldhavenumpy.roundinstead,forthecodetorun! Greattutorial,regardless.Thebesti’veseenforintrotoANNinpython.Thanks! Reply JasonBrownlee October2,2016at8:20am # Perhapsit’syourversionofPythonorenvironment? InPython2.7theround()functionisbuilt-in. Reply AC January14,2017at2:11am # Ifthereiscommentforpython3,shouldbebetter. #useunmpy.roundinstead,ifusingpython3, Reply JasonBrownlee January15,2017at5:24am # ThanksforthenoteAC. Reply Ash October9,2016at1:36am # Thisissimpletograsp!Greatpost!Howcanweperformdropoutinkeras? Reply JasonBrownlee October9,2016at6:49am # ThanksAsh. YoucanlearnaboutdropoutwithKerashere: http://machinelearningmastery.com/dropout-regularization-deep-learning-models-keras/ Reply HomagniSaha October14,2016at4:15am # HelloJason, Youareusingmodel.predictintheendtopredicttheresults.Isitpossibletosavethemodelsomewhereintheharddiskandtransferittoanothermachine(turtlebotrunningonROSformyinstance)andthenusethemodeldirectlyonturtlebottopredicttheresults? Pleasetellmehow Thankingyou HomagniSaha Reply JasonBrownlee October14,2016at9:07am # HiHomagni,greatquestion. Absolutely! LearnexactlyhowinthistutorialIwrote: http://machinelearningmastery.com/save-load-keras-deep-learning-models/ Reply Rimi October16,2016at8:21pm # HiJason, Iimplementedyoucodetobeginwith.ButIamgettinganaccuracyof45.18%withthesameparametersandeverything. Cantfigureoutwhy. Thanks Reply JasonBrownlee October17,2016at10:29am # TheredoessoundlikeaproblemthereRimi. Confirmthecodeanddatamatchexactly. Reply Ankit October26,2016at8:12pm # HiJason, Iamlittleconfusedwithfirstlayerparameters.Yousaidthatfirstlayerhas12neuronsandexpects8inputvariables. Whythereisadifferencebetweennumberofneurons,input_dimforfirstlayer. Regards, Ankit Reply JasonBrownlee October27,2016at7:45am # HiAnkit, Theproblemhas8inputvariablesandthefirsthiddenlayerhas12neurons.Inputsarethecolumnsofdata,thesearefixed.TheHiddenlayersingeneralarewhateverwedesignbasedonwhatevercapacitywethinkweneedtorepresentthecomplexityoftheproblem.Inthiscase,wehavechosen12neuronsforthefirsthiddenlayer. Ihopethatisclearer. Reply Tom October27,2016at3:04am # Hi, Ihaveadata,IRISlikedatabutwithmorecolmuns. IwanttouseMLPandDBN/CNNClassifier(oranyotherDeepLearningclassificaitonalgorithm)onmydatatoseehowcorrectlyitdoesclassifiedinto6groups. PreviouslyusingDEEPLEARNINGFORJ,todayfirsttimeseeKERAS. doesKERAShasexamples(codeexamples)ofDLClassificationalgorithms? Kindly, Tom Reply JasonBrownlee October27,2016at7:48am # YesTom,theexampleinthispostisanexampleofaneuralnetwork(deeplearning)appliedtoaclassificationproblem. Reply Rumesa October30,2016at1:57am # Ihaveinstalledtheanobutitgivesmetheerroroftensorflow.isitmendatorytoinstallbothpackages?becausetensorflowisnotsupportedonwndows.theonlywaytogetitonwindowsistoinstallvirtualmachine Reply JasonBrownlee October30,2016at8:57am # KeraswillworkjustfinewithTheano. JustinstallTheano,andconfigureKerastousetheTheanobackend. 
MoreinformationaboutconfiguringtheKerasbackendhere: http://machinelearningmastery.com/introduction-python-deep-learning-library-keras/ Reply Rumesa October31,2016at4:36am # heyjasonIhaverunyourcodebutgotthefollowingerror.AlthoughIhaveareadyinstalledtheanobackend.helpmeout.Ijuststuck. UsingTensorFlowbackend. Traceback(mostrecentcalllast): File“C:\Users\pc\Desktop\first.py”,line2,in fromkeras.modelsimportSequential File“C:\Users\pc\Anaconda3\lib\site-packages\keras\__init__.py”,line2,in from.importbackend File“C:\Users\pc\Anaconda3\lib\site-packages\keras\backend\__init__.py”,line64,in from.tensorflow_backendimport* File“C:\Users\pc\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py”,line1,in importtensorflowastf ImportError:Nomodulenamed‘tensorflow’ >>> Reply JasonBrownlee October31,2016at5:34am # ChangethebackendusedbyKerasfromTensorFlowtoTheano. YoucandothiseitherbyusingthecommandlineswitchorchangingtheKerasconfigfile. SeethelinkIpostedinthepreviouspostforinstructions. Reply Maria January6,2017at1:05pm # HelloRumesa! Haveyousolvedyourproblem?Ihavethesameone.Everywhereisthesameanswerwithkeras.jsonfileorenvirinmentvariablebutitdoesn’twork.Canyoutellmewhathaveworkedforyou? Reply JasonBrownlee January7,2017at8:20am # Interesting. Maybethereisanissuewiththelatestversionandatightcouplingtotensorflow?Ihavenotseenthismyself. PerhapsitmightbeworthtestingpriorversionsofKeras,suchas1.1.0? Trythis: pipinstall--upgrade--no-depskeras==1.1.0 1 pipinstall--upgrade--no-depskeras==1.1.0 Reply Alexon November1,2016at6:54am # HiJason, Firstoff,thankssomuchforcreatingtheseresources,Ihavebeenkeepinganeyeonyournewsletterforawhilenow,andIfinallyhavethefreetimetostartlearningmoreaboutitmyself,soyourworkhasbeenreallyappreciated. Myquestionis:HowcanIset/gettheweightsofeachhiddennode? Iamplanningtocreateseveralarraysrandomizedweights,thenuseageneticalgorithmtoseewhichweightarrayperformsthebestandimproveovergenerations.Howwouldbethebestwaytogoaboutthis,andifIusea“relu”activationfunction,amIrightinthinkingtheserandomlygeneratedweightsshouldbebetween0and0.05? Manythanksforyourhelp🙂 Alexon Reply JasonBrownlee November1,2016at8:05am # ThanksAlexon, Youcangetandsettheweightsfromanetwork. Youcanlearnmoreabouthowtodothisinthecontextofsavingtheweightstofilehere: http://machinelearningmastery.com/save-load-keras-deep-learning-models/ Ihopethathelpsasastart,I’dlovetohearhowyougo. Reply Alexon November6,2016at6:36am # Thatsgreat,thanksforpointingmeintherightdirection. I’dbehappytoletyouknowhowitgoes,butmighttakeawhileasthisisverymucha“whenIcanfindthetime”projectbetweenjobs🙂 Cheers! Reply ArnaldoGunzi November2,2016at10:17pm # Niceintroduction,thanks! Reply JasonBrownlee November3,2016at7:59am # I’mgladyoufounditusefulArnaldo. Reply Abbey November14,2016at11:05pm # Goodday Ihaveaquestion,howcanIrepresentacharacterasavectorthatcouldbeaninputfortheneuralnetworktopredictthewordmeaningandtrainedusingLSTM Forinstance,IhavebftopredictboyfriendorbestfriendandsimilarlyIhave2mortopredicttomorrow.Ineedtoencodealltheinputasacharacterrepresentedasvector,sothatitcanbetrainwithRNN/LSTMtopredicttheoutput. Thankyou. KindRegards Reply JasonBrownlee November15,2016at7:54am # HiAbbey,Youcanmapcharacterstointegerstogetintegervectors. Reply Abbey November15,2016at6:17pm # ThankyouJason,ifimapcharacterstointegersvaluetogetvectorsusingEnglishAlphabets,numbersandspecialcharacters ThequestionishowwillLSTMpredictthecharacter.Pleaseexampleinmoredetailsforme. 
Abbey (November 15, 2016 at 6:17 pm):
Thank you Jason. If I map characters to integer values to get vectors using the English alphabet, numbers and special characters, the question is how will the LSTM predict the character? Please explain in more detail for me. Regards.

Jason Brownlee (November 16, 2016 at 9:27 am):
Hi Abbey, if your output values are also characters, you can map them onto integers, and reverse the mapping to convert the predictions back to text.

Abbey (November 16, 2016 at 8:39 pm):
The output value of the character encoding will be text.

Abbey (November 15, 2016 at 6:22 pm):
Thank you, Jason. If I map characters to integer values to get vector representations of the informal text using the English alphabet, numbers and special characters, the question is how will the LSTM predict the character or words that have a close meaning to the input value? Please explain in more detail for me. I understand how RNN/LSTM work based on your tutorial example, but the logic in designing the processing is what I am stressed with. Regards.

Ammar (November 27, 2016 at 10:35 am):
Hi Jason, I am trying to implement a one-dimensional CNN on my data, so I built my network. The issue is that the method below does not work and does not give me any error message. Could you help me with this please?

def train_model(model, X_train, y_train, X_test, y_test):
    X_train = X_train.reshape(-1, 1, 41)
    X_test = X_test.reshape(-1, 1, 41)
    numpy.random.seed(seed)
    model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=100, batch_size=64)
    # Final evaluation of the model
    scores = model.evaluate(X_test, y_test, verbose=0)
    print("Accuracy: %.2f%%" % (scores[1] * 100))

Jason Brownlee (November 28, 2016 at 8:40 am):
Hi Ammar, I'm surprised that there is no error message. Perhaps run from the command line and add some print() statements to see exactly where it stops.

KK (November 28, 2016 at 6:55 pm):
Hi Jason, great work. I have another doubt: how can we apply this to text mining? I have a csv file containing review documents and labels. I want to classify the documents based on the text available. Can you do this favour?

Jason Brownlee (November 29, 2016 at 8:48 am):
I would recommend converting the chars to ints and then using an Embedding layer.

AlexM (November 30, 2016 at 10:52 pm):
Mr Jason, this is a great tutorial but I am stuck with some errors. First, I can't load the dataset correctly; I tried to correct the error but can't make it (FileNotFoundError: [Errno 2] No such file or directory: 'pima-indians-diabetes.csv'). Second, while trying to evaluate the model it says "X is not defined"; maybe this is because loading failed. Thanks!

Jason Brownlee (December 1, 2016 at 7:29 am):
You need to download the file and place it in your current working directory Alex. Does that help?

AlexM (December 1, 2016 at 6:45 pm):
Sir, it is now successful. Thanks!

Jason Brownlee (December 2, 2016 at 8:15 am):
Glad to hear it Alex.

Bappaditya (December 2, 2016 at 7:35 pm):
Hi Jason, first of all a special thanks to you for providing such a great tutorial. I am very new to machine learning and, truly speaking, I had no background in data science. The concept of ML overwhelmed me and now I have a desire to be an expert in this field. I need your advice to start from scratch. Also, I am a PhD student in Computer Engineering (computer hardware) and I want to apply it as a tool for fault detection and testing of ICs. Can you provide me some references on this field?

Jason Brownlee (December 3, 2016 at 8:29 am):
Hi Bappaditya, my best advice for getting started is here: http://machinelearningmastery.com/start-here/#getstarted I believe machine learning and deep learning are good tools for use on problems in fault detection. A good place to find references is http://scholar.google.com. Best of luck with your project.

AlexM (December 3, 2016 at 8:00 pm):
Well, as usual in our daily coding life errors happen. Now I have this error, how can I correct it? Thanks!
NoBackendError Traceback (most recent call last)
in ()
16 import librosa.display
17 audio_path = ('/Users/MA/PythonNotebook/OK.mp3')
---> 18 y, sr = librosa.load(audio_path)
C:\Users\MA\Anaconda3\lib\site-packages\librosa\core\audio.py in load(path, sr, mono, offset, duration, dtype)
--> 109 with audioread.audio_open(os.path.realpath(path)) as input_file:
C:\Users\MA\Anaconda3\lib\site-packages\audioread\__init__.py in audio_open(path)
113 # All backends failed!
--> 114 raise NoBackendError()
NoBackendError:

That is the error I am getting just when trying to load a song into librosa. Thanks!! @Jason Brownlee

Jason Brownlee (December 4, 2016 at 5:30 am):
Sorry, this looks like an issue with your librosa library, not a machine learning issue. I can't give you expert advice, sorry.

AlexM (December 4, 2016 at 10:30 pm):
Thanks, I have managed to correct the error. Happy Sunday to you all.

Jason Brownlee (December 5, 2016 at 6:49 am):
Glad to hear it Alex.

ayush (June 19, 2018 at 3:27 am):
How did you solve the problem?

Lei (December 4, 2016 at 10:52 pm):
Hi Jason, thank you for your amazing examples. I ran the same code on my laptop, but I did not get the same results. What could be the possible reasons? I am using Windows 8.1 64-bit + Eclipse + Anaconda 4.2 + Theano 0.9.4 + CUDA 7.5. I got results like the following (abridged):
Epoch 145/150
768/768 [==============================] – 0s – loss: 0.4614 – acc: 0.7799
Epoch 146/150
768/768 [==============================] – 0s – loss: 0.4636 – acc: 0.7734
Epoch 147/150
768/768 [==============================] – 0s – loss: 0.4561 – acc: 0.7812
Epoch 148/150
768/768 [==============================] – 0s – loss: 0.4734 – acc: 0.7669
Epoch 149/150
768/768 [==============================] – 0s – loss: 0.4625 – acc: 0.7826
Epoch 150/150
768/768 [==============================] – 0s – loss: 0.4638 – acc: 0.7773
acc: 79.69%

Jason Brownlee (December 5, 2016 at 6:50 am):
There is randomness in the learning process that we cannot control for yet. See this post: http://machinelearningmastery.com/randomness-in-machine-learning/

Nanya (December 10, 2016 at 2:55 pm):
Hello Jason Brownlee, thanks for sharing! I'm new to deep learning and I am wondering whether what you discussed here with Keras can be used to build a CNN in TensorFlow and train on some csv files for classification. Maybe this is a stupid question, but I am waiting for your reply. I'm working on my graduation project on word sense disambiguation with a CNN and just can't move on. Hoping for your help, best wishes!

Jason Brownlee (December 11, 2016 at 5:22 am):
Sorry Nanya, I'm not sure I understand your question. Are you able to rephrase it?

Anon (December 16, 2016 at 12:51 am):
I've just installed Anaconda with Keras and am using Python 3.5. It seems there's an error with the rounding using Py3 as opposed to Py2; I think it's because of this change: https://github.com/numpy/numpy/issues/5700 I removed the rounding and just used print(predictions), and it seemed to work, outputting floats instead. Does this look correct?
Epoch 150/150
0s – loss: 0.4593 – acc: 0.7839
[[ 0.79361773]
[ 0.10443526]
[ 0.90862554]
...,
[ 0.33652252]
[ 0.63745886]
[ 0.11704451]]

Jason Brownlee (December 16, 2016 at 5:44 am):
Nice, it does look good!
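For anyone who wants hard 0/1 labels rather than the raw sigmoid probabilities, a small sketch (not part of the tutorial) that behaves the same under Python 2 and 3:

predictions = model.predict(X)                        # floats in [0, 1], shape (n, 1)
rounded = (predictions > 0.5).astype(int).flatten()   # threshold at 0.5 to get 0/1 class labels
print(rounded[:10])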
Florin Claudiu Mihalache (December 19, 2016 at 2:37 am):
Hi Jason Brownlee. I tried to modify your example for my problem (Letter Recognition, http://archive.ics.uci.edu/ml/datasets/Letter+Recognition). My dataset looks like http://archive.ics.uci.edu/ml/machine-learning-databases/letter-recognition/letter-recognition.data (T,2,8,3,5,1,8,13,0,6,6,10,8,0,8,0,8). I tried to split the data into input and output like this:
X = dataset[:, 1:17]
Y = dataset[:, 0]
but I got an error (something related to strings not being recognized). I tried replacing each letter with its ASCII code (A became 65 and so on) and the string error disappeared. The program runs now, but the output looks like this (abridged):
19990/20000 [============================>.] – ETA: 0s – loss: -1219.8532 – acc: 0.0000e+00
20000/20000 [==============================] – 1s – loss: -1219.8594 – acc: 0.0000e+00
acc: 0.00%
I do not understand why. Can you please help me?

Anon (December 26, 2016 at 6:44 am):
What version of Python are you running?

karishma sharma (December 22, 2016 at 10:03 am):
Hi Jason, since the number of epochs is set to 150 and the batch size is 10, does the training algorithm pick 10 training examples at random in each iteration, given that we have only 768 in total in X? Or does it sample randomly after it has finished covering all of them? Thanks.

Jason Brownlee (December 23, 2016 at 5:27 am):
Good question. It iterates over the dataset 150 times, and within one epoch it works through 10 rows at a time before doing an update to the weights. The patterns are shuffled before each epoch. I hope that helps.

Kaustuv (January 9, 2017 at 4:57 am):
Hi Jason, thanks a lot for this blog. It really helps me start learning deep learning, which had been in a planning state for the last few months. Your simple, rich blogs are awesome. No questions from my side before completing all the tutorials. One question regarding the availability of your books: how can I buy those books from India?

Jason Brownlee (January 9, 2017 at 7:53 am):
All my books and training are digital; you can purchase them from here: http://machinelearningmastery.com/products

Stephen Wilson (January 15, 2017 at 4:00 pm):
Hi Jason, firstly your work here is a fantastic resource and I am very thankful for the effort you put in. I am a slightly-better-than-beginner at Python and an absolute novice at ML. I wonder if you could help me classify my problem and find an angle to work at it from.
My data is thus:
Column Names: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, Result
Values: 4, 4, 6, 6, 3, 2, 5, 5, 0, 0, 0, 0, 0, 0, 0, 4
I want to find the percentage chance of each Column Names category being the Result, based off the configuration of all the values present from 1-15. Then, if need be, compare the configuration of Values with another row of values to find the same, resulting in the total needed calculation as:
Column Names: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, Result
Values: 4, 4, 6, 6, 3, 2, 5, 5, 0, 0, 0, 0, 0, 0, 0, 4
Values 2: 7, 3, 5, 1, 4, 8, 6, 2, 9, 9, 9, 9, 9, 9, 9
I apologize if my explanation is not clear, and appreciate any help you can give me, thank you.

Jason Brownlee (January 16, 2017 at 10:39 am):
Hi Stephen, this process might help you work through your problem: http://machinelearningmastery.com/start-here/#process Specifically the first step, defining your problem. Let me know how you go.

Rohit (January 16, 2017 at 10:37 pm):
Thanks Jason for such a nice and concise example. I just wanted to ask if it is possible to save this model in a file and port it to an Android or iOS device? If so, what libraries are available for that? Thanks, Rohit.

Jason Brownlee (January 17, 2017 at 7:38 am):
Thanks Rohit. Here's an example of saving a Keras model to file: http://machinelearningmastery.com/save-load-keras-deep-learning-models/ I don't know about running Keras on an Android or iOS device. Let me know how you go.

zaheer khan (June 16, 2017 at 7:17 pm):
Dear Jason, thanks for sharing this article. I am a novice to deep learning, and my apologies if my question is not clear. My question is: could we call all these functions and the program from any .php, .aspx, or .html web page? I mean, I load the variables and other file selections from a user interface and then make them the input to these functions. I will be waiting for your kind reply. Thanks in advance, zaheer.

Jason Brownlee (June 17, 2017 at 7:25 am):
Perhaps. This sounds like a systems design question, not really machine learning. I would suggest you gather requirements and assess risks like any software engineering project.

Hsiang (January 18, 2017 at 3:35 pm):
Hi Jason, thank you for your blog! It is wonderful! I used TensorFlow as the backend and implemented the procedures using Jupyter. I did "source activate tensorflow" -> "ipython notebook". I can successfully use Keras and import tensorflow. However, it seems that such an environment doesn't support pandas and sklearn. Do you have any way to incorporate pandas, sklearn and Keras? (I wish to use sklearn to revisit the classification problem and compare the accuracy with the deep learning method, but I also wish to put the work together in the same interface.) Thanks!

Jason Brownlee (January 19, 2017 at 7:24 am):
Sorry, I do not use notebooks myself. I cannot offer you good advice.

Hsiang (January 19, 2017 at 12:53 pm):
Thanks, Jason! Actually the problem is not with notebooks. Even when I used the terminal mode, i.e. doing "source activate tensorflow" only, it failed to import sklearn. Does that mean the tensorflow library is not compatible with sklearn? Thanks again!

Jason Brownlee (January 20, 2017 at 10:17 am):
Sorry Hsiang, I don't have experience using sklearn and tensorflow with virtual environments.

Hsiang (January 21, 2017 at 12:46 am):
Thank you!

Jason Brownlee (January 21, 2017 at 10:34 am):
You're welcome Hsiang.

keshav bansal (January 24, 2017 at 12:45 am):
Hello sir, a very informative post indeed. I know my question is a very trivial one, but can you please show me how to predict on an explicitly mentioned data tuple, say v = [6, 148, 72, 35, 0, 33.6, 0.627, 50]? Thanks for the tutorial anyway.

Jason Brownlee (January 24, 2017 at 11:04 am):
Hi keshav, you can make predictions by calling model.predict().
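A minimal sketch of what that call might look like for keshav's example row, assuming model is the network trained earlier in the tutorial (the 0.5 threshold is just the usual convention for a sigmoid output):

from numpy import array
v = array([[6, 148, 72, 35, 0, 33.6, 0.627, 50]])   # shape (1, 8): one row, eight input values
probability = model.predict(v)                       # sigmoid output in [0, 1]
label = int(probability[0][0] > 0.5)                 # 1 = onset of diabetes, 0 = no onset
print(probability, label)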
CATRINA WEBB (January 25, 2017 at 9:06 am):
When I rerun the file (without predictions), does it reset the model and weights?

Ericson (January 30, 2017 at 8:04 pm):
Excuse me sir, I want to ask you a question about this line: dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=','). I am on a Mac and downloaded the dataset, then I converted the text into a csv file. Running the program, I got:
Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 12:39:47) on darwin
Using TensorFlow backend.
Traceback (most recent call last):
File "/Users/luowenbin/Documents/database_test.py", line 9, in
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=',')
File ".../numpy/lib/npyio.py", line 687, in floatconv
return float(x)
ValueError: could not convert string to float: book
How can I solve this problem? Give me a hand, thank you!

Jason Brownlee (February 1, 2017 at 10:22 am):
Hi Ericson, confirm that the contents of "pima-indians-diabetes.csv" meet your expectation of a list of CSV lines.

Sukhpal (February 7, 2017 at 9:00 pm):
Excuse me sir, when I run this code for my dataset I encounter this problem, please help me find a solution:
runfile('C:/Users/sukhpal/.spyder/temp.py', wdir='C:/Users/sukhpal/.spyder')
Using TensorFlow backend.
Traceback (most recent call last):
File "C:/Users/sukhpal/.spyder/temp.py", line 1, in
from keras.models import Sequential
File "C:\Users\sukhpal\Anaconda2\lib\site-packages\keras\backend\tensorflow_backend.py", line 1, in
import tensorflow as tf
ImportError: No module named tensorflow

Jason Brownlee (February 8, 2017 at 9:34 am):
This is a change with the most recent version of tensorflow. I will investigate and change the example. For now, consider installing and using an older version of tensorflow.

Will (February 14, 2017 at 5:33 am):
Great tutorial! An amazing amount of work you've put in and great marketing skills (I also have an email list, ebooks and sequence, etc.). I ran this in a Jupyter notebook and noticed the 144th epoch (acc .7982) had more accuracy than at 150. Why is that? P.S. I did this for the print: print(numpy.round(predictions)). It seems to avoid a list of arrays which, when printed, includes the dtype (messy).

Jason Brownlee (February 14, 2017 at 10:07 am):
Thanks Will. The model will fluctuate in performance while learning. You can configure triggered checkpoints to save the model if/when conditions like a decrease in train/validation performance are detected. Here's an example: http://machinelearningmastery.com/check-point-deep-learning-models-keras/
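A rough sketch of that checkpointing idea applied to the tutorial's model (the filename and the 20% validation split are arbitrary choices for illustration; with older Keras versions the monitored key is 'val_acc' rather than 'val_accuracy'):

from tensorflow.keras.callbacks import ModelCheckpoint
# keep only the weights from the best epoch seen so far on the validation split
checkpoint = ModelCheckpoint('best_model.h5', monitor='val_accuracy', save_best_only=True, mode='max')
model.fit(X, y, validation_split=0.2, epochs=150, batch_size=10, callbacks=[checkpoint])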
Sukhpal (February 14, 2017 at 3:50 pm):
Please help me to find out this error:
runfile('C:/Users/sukhpal/.spyder/temp.py', wdir='C:/Users/sukhpal/.spyder') ERROR: execution aborted

Jason Brownlee (February 15, 2017 at 11:32 am):
I'm not sure Sukhpal. Consider getting the code working from the command line; I don't use IDEs myself.

Kamal (February 14, 2017 at 5:15 pm):
Please help me to find this error:
Epoch 194/195
195/195 [==============================] – 0s – loss: 0.2692 – acc: 0.8667
Epoch 195/195
195/195 [==============================] – 0s – loss: 0.2586 – acc: 0.8667
195/195 [==============================] – 0s
Traceback (most recent call last):

Jason Brownlee (February 15, 2017 at 11:32 am):
What was the error exactly Kamal?

Kamal (February 15, 2017 at 3:24 pm):
Sir, when I run the code on my dataset it does not show the overall accuracy, although it shows the accuracy and loss for all the iterations.

Jason Brownlee (February 16, 2017 at 11:06 am):
I'm not sure I understand your question Kamal, could you please restate it?

Val (February 15, 2017 at 9:00 pm):
Hi Jason, I'm just starting deep learning in Python using Keras and Theano. I have followed the installation instructions without a hitch and tested some examples, but when I run this one line by line I get a lot of exceptions and errors once I run model.fit(X, Y, nb_epochs=150, batch_size=10).

Jason Brownlee (February 16, 2017 at 11:06 am):
What errors are you getting?

CrisH (February 17, 2017 at 8:12 pm):
Hi, how do I know what number to use for random.seed()? I mean, you use 7; is there any reason for that? Also, is it enough to use it only once, at the beginning of the code?

Jason Brownlee (February 18, 2017 at 8:38 am):
You can use any number CrisH. The fixed random seed makes the example reproducible. You can learn more about randomness and random seeds in this post: http://machinelearningmastery.com/randomness-in-machine-learning/

kk (February 18, 2017 at 1:53 am):
I am new to deep learning and found this great tutorial. Keep it up, I look forward to more!

Jason Brownlee (February 18, 2017 at 8:41 am):
Thanks!

Iqra Ameer (February 21, 2017 at 5:20 am):
Hi, I have a problem executing the above example as-is. It seems that it's not running properly and stops:
Epoch 149/150
768/768 [==============================] – 0s – loss: 0.4711 – acc: 0.7721
Epoch 150/150
768/768 [==============================] – 0s – loss: 0.4731 – acc: 0.7747
32/768 [>.............................] – ETA: 0s acc: 76.43%
I am new to this field, could you please guide me about this error? I also executed it on another dataset; it stops with the same behavior.

Jason Brownlee (February 21, 2017 at 9:39 am):
What is the error exactly? The example hangs? Maybe try the Theano backend and see if that makes a difference. Also make sure all of your libraries are up to date.

Iqra Ameer (February 22, 2017 at 5:47 am):
Dear Jason, thank you so much for your valuable suggestions. I tried the Theano backend and also updated all my libraries, but again it hung at:
Epoch 150/150
768/768 [==============================] – 0s – loss: 0.4611 – acc: 0.7773
32/768 [>.............................] – ETA: 0s acc: 78.91%

Jason Brownlee (February 22, 2017 at 10:05 am):
I'm sorry to hear that, I have not seen this issue before. Perhaps a RAM issue or a CPU overheating issue? Are you able to try different hardware?

frd (March 8, 2017 at 2:50 am):
Hi! Were you able to find a solution for that? I'm having exactly the same problem:
Epoch 150/150
768/768 [==============================] – 0s – loss: 0.4586 – acc: 0.7891
32/768 [>.............................] – ETA: 0s acc: 76.69%

Bhanu (February 23, 2017 at 1:51 pm):
Hello sir, I want to ask whether we can convert this code to deep learning by increasing the number of layers.

Jason Brownlee (February 24, 2017 at 10:12 am):
Sure, you can increase the number of layers. Try it and see.
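A sketch of a deeper variant of the tutorial's network, using the same imports as the tutorial; the number of extra layers and their widths here are arbitrary choices for illustration, not a recommended configuration:

model = Sequential()
model.add(Dense(12, input_shape=(8,), activation='relu'))
model.add(Dense(10, activation='relu'))   # extra hidden layer
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))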
Ananya Mohapatra (February 28, 2017 at 6:40 pm):
Hello sir, could you please tell me how I determine the number of neurons in each layer? I am using a different dataset and am unable to know the number of neurons for each layer.

Jason Brownlee (March 1, 2017 at 8:33 am):
Hi Ananya, great question. Sorry, there is no good theory on how to configure a neural net. You can configure the number of neurons in a layer by trial and error. Also consider tuning the number of epochs and batch size at the same time.

Ananya Mohapatra (March 1, 2017 at 4:42 pm):
Thank you so much sir. It worked! 🙂

Jason Brownlee (March 2, 2017 at 8:11 am):
Glad to hear it Ananya.

Jayant Sahewal (February 28, 2017 at 8:11 pm):
Hi Jason, really helpful blog. I have a question about how much time it takes to converge. I have a dataset with around 4000 records, 3 input columns and 1 output column. I came up with the following model:

def create_model(dropout_rate=0.0, weight_constraint=0, learning_rate=0.001, activation='linear'):
    # create model
    model = Sequential()
    model.add(Dense(6, input_dim=3, init='uniform', activation=activation, W_constraint=maxnorm(weight_constraint)))
    model.add(Dropout(dropout_rate))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    # Optimizer
    optimizer = Adam(lr=learning_rate)
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

# create model
model = KerasRegressor(build_fn=create_model, verbose=0)
# define the grid search parameters
batch_size = [10]
epochs = [100]
weight_constraint = [3]
dropout_rate = [0.9]
learning_rate = [0.01]
activation = ['linear']
param_grid = dict(batch_size=batch_size, nb_epoch=epochs, dropout_rate=dropout_rate, weight_constraint=weight_constraint, learning_rate=learning_rate, activation=activation)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=5)
grid_result = grid.fit(X_train, Y_train)

I have a 32-core machine with 64 GB RAM and it does not converge even in more than an hour. I can see all the cores busy, so it is using all the cores for training. However, if I change the input neurons to 3 then it converges in around 2 minutes. Keras version: 1.1.1, TensorFlow version: 0.10.0rc0, Theano version: 0.8.2.dev. It's using the TensorFlow backend. Can you help me understand what is going on, or point me in the right direction? Do you think switching to Theano will help? Best, Jayant.

Jason Brownlee (March 1, 2017 at 8:36 am):
This post might help you tune your deep learning model: http://machinelearningmastery.com/improve-deep-learning-performance/ I hope that helps as a start.

Animesh Mohanty (March 1, 2017 at 9:21 pm):
Hello sir, could you please tell me how I can plot the results of the code on a graph? I made a few adjustments to the code so as to run it on a different dataset.

Jason Brownlee (March 2, 2017 at 8:16 am):
What do you want to plot exactly Animesh?

Animesh Mohanty (March 2, 2017 at 4:56 pm):
Accuracy vs the number of neurons in the input layer and the number of neurons in the hidden layer.

param (March 2, 2017 at 12:15 am):
Sir, can you please explain the different attributes used in this statement:
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

param (March 2, 2017 at 12:16 am):
Precisely, what is model.metrics_names?

Jason Brownlee (March 2, 2017 at 8:22 am):
model.metrics_names is a list of names of the metrics collected during training. More details here: https://keras.io/models/sequential/

Jason Brownlee (March 2, 2017 at 8:20 am):
Hi param, it is using string formatting. %s formats a string, %.2f formats a floating point value with 2 decimal places, and %% includes a percent symbol. You can learn more about the print function here: https://docs.python.org/3/library/functions.html#print More info on string formatting here: https://pyformat.info/
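A tiny illustration of the same formatting pattern with made-up values (model.evaluate() returns the loss first, followed by the metrics listed in model.metrics_names):

scores = [0.48, 0.7799]                              # hypothetical [loss, accuracy]
print("%s: %.2f%%" % ("accuracy", scores[1] * 100))  # prints "accuracy: 77.99%"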
Vijin KP (March 2, 2017 at 4:01 am):
Hi Jason, it was an awesome post. Could you please tell me how we decide the following in a DNN: 1. the number of neurons in the hidden layers, 2. the number of hidden layers. Thanks, Vijin.

Jason Brownlee (March 2, 2017 at 8:22 am):
Great question Vijin. Generally, trial and error. There are no good theories on how to configure a neural network.

Vijin KP (March 3, 2017 at 5:23 am):
We do cross validation, grid search etc. to find the hyperparameters in machine learning algorithms. Similarly, can we do anything to identify the above parameters?

Jason Brownlee (March 3, 2017 at 7:46 am):
Yes, we can use grid search and tuning for neural nets. The stochastic nature of neural nets means that each experiment (set of configs) will have to be run many times (30? 100?) so that you can take the mean performance. More general info on tuning neural nets here: http://machinelearningmastery.com/improve-deep-learning-performance/ More on randomness and stochastic algorithms here: http://machinelearningmastery.com/randomness-in-machine-learning/

Bogdan (March 2, 2017 at 11:48 pm):
Jason, please tell me about these lines in your code:
seed = 7
numpy.random.seed(seed)
What do they do? And why do they do it? One more question: why do you call the last section "Bonus: Make a prediction"? I thought this is what the ANN was created for. What is the point if your network's output is just what you already know?

Jason Brownlee (March 3, 2017 at 7:44 am):
They seed the random number generator so that it produces the same sequence of random numbers each time the code is run. This is to ensure you get the same result as me. I'm not convinced it works with Keras though. More on randomness in machine learning here: http://machinelearningmastery.com/randomness-in-machine-learning/ I was showing how to build and evaluate the model in this tutorial. The part about standalone prediction was an add-on.

Sounak sahoo (March 3, 2017 at 7:39 pm):
What exactly is the work of "seed" in the neural network code? What does it do?

Jason Brownlee (March 6, 2017 at 10:44 am):
Seed refers to seeding the random number generator so that the same sequence of random numbers is generated each time the example is run. The aim is to make the examples 100% reproducible, but this is hard with symbolic math libs like the Theano and TensorFlow backends. For more on randomness in machine learning, see this post: http://machinelearningmastery.com/randomness-in-machine-learning/

Priya Sundari (March 3, 2017 at 10:19 pm):
Hello sir, could you please tell me what the role of the optimizer and binary_crossentropy is exactly? It is written that the optimizer is used to search through the weights of the network; which weights are we talking about exactly?

Jason Brownlee (March 6, 2017 at 10:48 am):
Hi Priya, you can learn more about the fundamentals of neural nets here: http://machinelearningmastery.com/neural-networks-crash-course/

Bogdan (March 3, 2017 at 10:23 pm):
If I am not mistaken, those lines I commented about are used when we write init='uniform'?

Bogdan (March 3, 2017 at 10:44 pm):
Could you explain in more detail what the batch size is?

Jason Brownlee (March 6, 2017 at 10:50 am):
Hi Bogdan, batch size is how many patterns to show to the network before the weights are updated with the accumulated errors. The smaller the batch, the faster the learning, but also the more noisy the learning (higher variance). Try exploring different batch sizes and see the effect on the train and test performance over each epoch.
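In the tutorial's configuration that works out as follows (just restating the numbers already used above):

# 768 rows with batch_size=10 gives 77 weight updates per epoch (the final batch holds the last 8 rows),
# and epochs=150 repeats that full pass over the shuffled dataset 150 times
model.fit(X, y, epochs=150, batch_size=10)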
Mohammad (March 7, 2017 at 6:50 am):
Dear Jason, firstly, thanks for your great tutorials. I am trying to classify computer network packets using the first 500 bytes of every packet to identify its protocol. I am trying to use a 1D convolution. For a simpler task, I just want to do binary classification and then tackle multilabel classification for 10 protocols. Here is my code, but the accuracy is only about 0.63. How can I improve the performance? Should I use RNNs?

model = Sequential()
model.add(Convolution1D(64, 10, border_mode='valid', activation='relu', subsample_length=1, input_shape=(500, 1)))
model.add(MaxPooling1D(2))
model.add(Flatten())
model.add(Dense(200, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_set, y_train, batch_size=250, nb_epoch=30, show_accuracy=True)

Jason Brownlee (March 7, 2017 at 9:37 am):
This post lists some ideas to try to lift performance: http://machinelearningmastery.com/improve-deep-learning-performance/

Damiano (March 7, 2017 at 10:13 pm):
Hi Jason, thank you so much for this awesome tutorial. I have just started with Python and machine learning. I am playing with the code and making a few changes, for example this:

# create model
model = Sequential()
model.add(Dense(250, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(200, init='uniform', activation='relu'))
model.add(Dense(200, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))

and this:

model.fit(X, Y, nb_epoch=250, batch_size=10)

Then I would like to pass some arrays for prediction, so:

new_input = numpy.array([[3,88,58,11,54,24.8,267,22], [6,92,92,0,0,19.9,188,28], [10,101,76,48,180,32.9,171,63], [2,122,70,27,0,36.8,0.34,27], [5,121,72,23,112,26.2,245,30]])
predictions = model.predict(new_input)
print predictions  # [1.0, 1.0, 1.0, 0.0, 1.0]

Is this correct? In this example I used the same series as in training (which have class 0), but I am getting wrong results. Only one array is correctly predicted. Thank you so much!

Jason Brownlee (March 8, 2017 at 9:41 am):
Looks good. Perhaps you could try changing the configuration of your model to make it more skillful? See this post: http://machinelearningmastery.com/improve-deep-learning-performance/

ANJI (March 13, 2017 at 8:48 pm):
Hello sir, could you please tell me how to rectify my error below? It is raised while the model is training:
ValueError: Error when checking model input: expected convolution2d_input_1 to have 4 dimensions, but got array with shape (68, 28, 28).

Jason Brownlee (March 14, 2017 at 8:17 am):
It looks like you are working with a CNN, which is not related to this tutorial. Consider trying this tutorial to get familiar with CNNs: http://machinelearningmastery.com/handwritten-digit-recognition-using-convolutional-neural-networks-python-keras/

Rimjhim (March 14, 2017 at 8:21 pm):
I want a neural net that can predict sin values. Further, from a given dataset I need to determine the function (for example, if the data is from tan or cos, how do I determine that the data is from tan only or cos only). Thanks in advance.

Sudarshan (March 15, 2017 at 11:19 pm):
Keras just updated to Keras 2.0. I have an updated version of this code here: https://github.com/sudarshan85/keras-projects/tree/master/mlm/pima_indians

Jason Brownlee (March 16, 2017 at 7:59 am):
Nice work.

subhasish (March 16, 2017 at 5:09 pm):
Hello sir, can we use PSO (particle swarm optimisation) in this? If so, can you tell how?

Jason Brownlee (March 17, 2017 at 8:25 am):
Sorry, I don't have an example of PSO for fitting neural network weights.
Ananya Mohapatra (March 16, 2017 at 10:03 pm):
Hello sir, what type of neural network is used in this code? As there are 3 types of neural network: feed forward, radial basis function and recurrent neural network.

Jason Brownlee (March 17, 2017 at 8:28 am):
A multilayer perceptron (MLP) neural network. A classic type from the 1980s.

Diego (March 17, 2017 at 3:58 am):
Got this error while compiling: sigmoid_cross_entropy_with_logits() got an unexpected keyword argument 'labels'.

Jason Brownlee (March 17, 2017 at 8:30 am):
Perhaps confirm that your libraries are all up to date (Keras, Theano or TensorFlow)?

Rohan (March 20, 2017 at 5:20 am):
Hi Jason! I am trying to use two odd frames of a video to predict the even one. Thus I need to give two images as input to the network and get one image as output. Can you help me with the syntax for the first model.add()? I have X_train of dimension (190, 2, 240, 320, 3) where 190 is the number of odd pairs, 2 is the two odd images, and (240, 320, 3) is the (height, width, depth) of each image.

Herli Menezes (March 21, 2017 at 8:33 am):
Hello Jason, thanks for your good tutorial. However, I found some issues, warnings like these:
1 – UserWarning: Update your Dense call to the Keras 2 API: Dense(12, activation="relu", kernel_initializer="uniform", input_dim=8)
2 – UserWarning: Update your Dense call to the Keras 2 API: Dense(8, activation="relu", kernel_initializer="uniform")
3 – UserWarning: Update your Dense call to the Keras 2 API: Dense(1, activation="sigmoid", kernel_initializer="uniform")
4 – UserWarning: The nb_epoch argument in fit has been renamed epochs.
I think these are due to some package update. But the output of predictions was an array of zeros, such as: [0.0, 0.0, 0.0, ..., 0.0]. I am running on a Linux machine, Fedora 24, Python 2.7.13. Why? Thank you!

Jason Brownlee (March 21, 2017 at 8:45 am):
These look like warnings related to the recent Keras 2.0 release. They look like just warnings, and you can still run the example. I do not know why you are getting all zeros. I will investigate.

Ananya Mohapatra (March 21, 2017 at 6:21 pm):
Hello sir, can you please help me build a recurrent neural network with the above given dataset? I am having a bit of trouble building the layers.

Jason Brownlee (March 22, 2017 at 7:56 am):
Hi Ananya, the Pima Indian diabetes dataset is a binary classification problem. It is not appropriate for a recurrent neural network as there is no sequence information to learn.

Ananya Mohapatra (March 22, 2017 at 8:04 pm):
Sir, so could you tell me on which type of dataset a recurrent neural network would accurately work? I have a dataset of EEG signals of epileptic patients; will a recurrent network work on this?

Jason Brownlee (March 23, 2017 at 8:49 am):
It may, if it is regular enough. LSTMs are excellent at sequence problems that have regularity or clear signals to detect.
Shane (March 22, 2017 at 5:18 am):
Hi Jason, I have a quick question related to an error I am receiving when running the code in the tutorial. When I run
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Python returns the following error:
sigmoid_cross_entropy_with_logits() got an unexpected keyword argument 'labels'

Jason Brownlee (March 22, 2017 at 8:09 am):
Sorry, I have not seen this error Shane. Perhaps check that your environment is up to date with the latest versions of the deep learning libraries?

Tejes (March 24, 2017 at 1:04 am):
Hi Jason, thanks for this awesome post. I ran your code with the TensorFlow backend, just out of curiosity. The accuracy returned was different every time I ran the code. That didn't happen with Theano. Can you tell me why? Thanks in advance!

Jason Brownlee (March 24, 2017 at 7:56 am):
You will get different accuracy each time you run the code because neural networks are stochastic. This is not related to the backend (I expect). More on randomness in machine learning here: http://machinelearningmastery.com/randomness-in-machine-learning/

Saurabh Bhagvatula (March 27, 2017 at 9:49 pm):
Hi Jason, I'm new to deep learning and learning it from your tutorials, which previously helped me understand machine learning very well. In the following code, I want to know why the number of neurons differs from input_dim in the first layer of the neural net:
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))

Jason Brownlee (March 28, 2017 at 8:22 am):
You can specify the number of inputs via "input_dim", and you can specify the number of neurons in the first hidden layer as the first parameter to Dense().

Saurabh Bhagvatula (March 28, 2017 at 4:15 pm):
Thanks a lot.

Jason Brownlee (March 29, 2017 at 9:05 am):
You're welcome.

Nalini (March 29, 2017 at 2:52 am):
Hi Jason, while running this code for k-fold cross validation it is not working. Please give the code for k-fold cross validation for the binary class.

Jason Brownlee (March 29, 2017 at 9:10 am):
Generally neural nets are too slow/large for k-fold cross validation. Nevertheless, you can use the sklearn wrapper for a Keras model and use it with any sklearn resampling method: http://machinelearningmastery.com/evaluate-performance-machine-learning-algorithms-python-using-resampling/

trangtruong (March 29, 2017 at 7:04 pm):
Hi Jason, why, when I use the evaluate function to get the accuracy score for my model with a test dataset, does it return a result greater than 1? I can't understand.

enixon (April 3, 2017 at 3:08 am):
Hey Jason, thanks for this great article! I get the following error when running the code above: TypeError: Received unknown keyword arguments: {'epochs': 150}. Any ideas on why that might be? I can't get 'epochs', nb_epochs, etc. to work...

Jason Brownlee (April 4, 2017 at 9:07 am):
You need to update to Keras version 2.0 or higher.

Ananya Mohapatra (April 5, 2017 at 9:30 pm):
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(10, input_dim=25, init='normal', activation='softplus'))
    model.add(Dense(3, init='normal', activation='softmax'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
    return model
Sir, here mean_squared_error has been used for the loss calculation. Is it the same as the LMS algorithm? If not, can we use LMS, NLMS or RLS to calculate the loss?

Ahmad Hijazi (April 5, 2017 at 10:19 pm):
Hello Jason, thank you a lot for this example. My question is: after I trained the model and an accuracy of 79.2% for example is obtained successfully, how can I test this model on new data? For example, if a new patient with new records appears, I want to guess the result (0 or 1) for him. How can I do that in the code?
Jason Brownlee (April 9, 2017 at 2:36 pm):
You can fit your model on all available training data and then make predictions on new data as follows:
yhat = model.predict(X)

Perick Flaus (April 6, 2017 at 12:16 am):
Thanks Jason, how can we test if a new patient will be diabetic or not (0 or 1)?

Jason Brownlee (April 9, 2017 at 2:36 pm):
Fit the model on all training data and call:
yhat = model.predict(X)

Gangadhar (April 12, 2017 at 1:28 am):
Dr Jason, in compiling the model I got the error below and am unable to resolve it:
TypeError: compile() got an unexpected keyword argument 'metrics'

Jason Brownlee (April 12, 2017 at 7:53 am):
Ensure you have the latest version of Keras, v2.0 or higher.

Omogbehin Azeez (April 13, 2017 at 1:48 am):
Hello sir, thank you for the post. A quick question: my dataset has 24 inputs and 1 binary output (170 instances, 100 epochs, hidden layer = 6, batch size 10, kernel_initializer='normal'). I adapted your code using TensorFlow and Keras. I am getting an accuracy of 98 to 100 percent. I am scared of over-fitting in my model. I need your candid advice. Kind regards, sir.

Jason Brownlee (April 13, 2017 at 10:07 am):
Yes, evaluate your model using k-fold cross-validation to ensure you are not tricking yourself.

Omogbehin Azeez (April 14, 2017 at 1:08 am):
Thank you sir.

Sethu Baktha (April 13, 2017 at 5:19 am):
Hi Jason, if I want to use the diabetes dataset (NOT Pima) https://archive.ics.uci.edu/ml/datasets/Diabetes to predict blood glucose, which of your tutorials and e-books would I need to start with? Also, the data in its current format has time, code and value: is it usable as is, or do I need to convert the data into another format to be able to use it? Thanks for your help.

Jason Brownlee (April 13, 2017 at 10:13 am):
This process will help you frame and work through your dataset: http://machinelearningmastery.com/start-here/#process I hope that helps as a start.

Sethu Baktha (April 13, 2017 at 10:25 am):
Dr. Jason, the data is a time series (time based data) with categorical values (20), with two numbers, one for insulin level and another for blood sugar level. Each time series does not have every categorical value. For example, one category is blood sugar before breakfast, another category is blood sugar after breakfast, before lunch and after lunch. Sometimes some of these category data are missing. I read through the above link, but it does not talk about time series, or categorical data with some categories missing, and what to do in those cases. Please let me know if any of your books will help clarify these points.

Jason Brownlee (April 14, 2017 at 8:43 am):
Hi Sethu, I have many posts on time series that will help. Get started here: http://machinelearningmastery.com/start-here/#timeseries With categorical data, I would recommend an integer encoding perhaps followed by a one-hot encoding. You can learn more about these encodings here: http://machinelearningmastery.com/data-preparation-gradient-boosting-xgboost-python/ I hope that helps.

Omogbehin Azeez (April 14, 2017 at 9:49 am):
Hello sir, is it compulsory to normalize the data before using an ANN model? I read somewhere an author insisting that each attribute be comparable on the scale of [0, 1] for a meaningful model. What is your take on that, sir? Kind regards.

Jason Brownlee (April 15, 2017 at 9:29 am):
Yes. You must scale your data to the bounds of the activation used.
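A small sketch of one common way to do that scaling with scikit-learn before fitting (MinMaxScaler rescales each input column to [0, 1]; in practice the scaler should be fit on training data only, which is glossed over here):

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
X_scaled = scaler.fit_transform(X)    # each of the 8 input columns now lies in [0, 1]
model.fit(X_scaled, y, epochs=150, batch_size=10)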
shiva (April 14, 2017 at 10:38 am):
Hi Jason, you are simply awesome. I'm one of the many who benefited from your book "Machine Learning Mastery with Python". I'm working on a medical image classification problem. I have two classes of medical images (each class having 1000 images of 32*32) to be worked on by convolutional neural networks. Could you guide me on how to load this data into a Keras dataset, or how to use my data while following your simple steps? Kindly help.

Jason Brownlee (April 15, 2017 at 9:30 am):
Load the data as numpy arrays and then you can use it with Keras.

Omogbehin Azeez (April 18, 2017 at 12:09 am):
Hello sir, I adapted your code with cross validation pipelined with an ANN (Keras) for my model. It still gave me 100%. I got the data from UCI (Chronic Kidney Disease). It was 400 instances, 24 input attributes and 1 binary attribute. When I removed the rows with missing data I was left with 170 instances. Is my dataset too small for a 24 input layer, 24 hidden layer and 1 output layer ANN, using adam and the kernel initializer as uniform?

Jason Brownlee (April 18, 2017 at 8:32 am):
It is not too small. Generally, the size of the training dataset really depends on how you intend to use the model.

Omogbehin Azeez (April 18, 2017 at 11:10 pm):
Thank you sir for the response. I guess I have to contend with the over-fitting of my model.

Padmanabhan Krishnamurthy (April 19, 2017 at 6:26 pm):
Hi Jason, great tutorial. Love the site 🙂 Just a quick query: why have you used adam as an optimizer over sgd? Moreover, when do we use sgd optimization, and what exactly does it involve?

Jason Brownlee (April 20, 2017 at 9:23 am):
Adam seems to consistently work well with little or no customization. SGD requires configuration of at least the learning rate and momentum. Try a few methods and use the one that works best for your problem.

Padmanabhan Krishnamurthy (April 20, 2017 at 4:32 pm):
Thanks 🙂

Omogbehin Azeez (April 25, 2017 at 8:13 am):
Hello sir, good day. How can I get all the weights and biases of the Keras ANN? Kind regards.

Jason Brownlee (April 26, 2017 at 6:19 am):
You can save the network weights, see this post: http://machinelearningmastery.com/save-load-keras-deep-learning-models/ You can also use the API to access the weights directly.

Shiva (April 27, 2017 at 5:43 am):
Hi Jason, I am currently working with the IMDB sentiment analysis problem as mentioned in your book. I am using Anaconda 3 with Python 3.5.2. In an attempt to summarize the review length as you have mentioned in your book, when I try to execute:
result = map(len, X)
print("Mean %.2f words (%f)" % (numpy.mean(result), numpy.std(result)))
it returns the error: unsupported operand type(s) for /: 'map' and 'int'.
Kindly help with the modified syntax. Looking forward...

Jason Brownlee (April 27, 2017 at 8:47 am):
I'm sorry to hear that. Perhaps comment out that line? Or change it to remove the formatting and just print the raw mean and stdev values for you to review?

Elikplim (May 1, 2017 at 1:58 am):
Hello, I am quite new to Python, NumPy and Keras (background in PHP, MySQL etc). If there are 8 input variables and 1 output variable (9 total), and the array indexing starts from zero (from what I've gathered it's a NumPy array, which is built on Python lists) and the order is [rows, columns], then shouldn't our input variable (X) be X = dataset[:,0:7] (where we select from the 1st to 8th columns, i.e. 0th to 7th indices) and our output variable (Y) be Y = dataset[:,8] (where we select the 9th column, i.e. 8th index)?

Jason Brownlee (May 1, 2017 at 5:59 am):
You can learn more about array indexing in numpy here: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html

Jackie Lee (May 1, 2017 at 12:47 pm):
I'm having trouble with the predictions part. It says ValueError: Error when checking model input: expected dense_1_input to have shape (None, 502) but got array with shape (170464, 502).
### MAKE PREDICTIONS ###
testset = numpy.loadtxt("right_stim_FD1.csv", delimiter=",")
A = testset[:, 0:502]
B = testset[:, 502]
probabilities = model.predict(A, batch_size=10, verbose=1)
predictions = float(round(a) for a in probabilities)
accuracy = numpy.mean(predictions == B)
# round predictions
# rounded = [round(x[0]) for x in predictions]
print(predictions)
print("Prediction Accuracy: %.2f%%" % (accuracy*100))

Jason Brownlee (May 2, 2017 at 5:55 am):
It looks like you might be giving the entire dataset as the output (y) rather than just the output variable.
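When a shape error like this appears, a quick sanity check (a generic sketch using the tutorial's 8-column layout rather than the 502-column data above) is to print the array shapes before fitting or predicting; the slice 0:8 stops before index 8, so the 9th column (index 8) is left for the output:

X = dataset[:, 0:8]
y = dataset[:, 8]
print(X.shape, y.shape)       # expect something like (768, 8) and (768,) for the tutorial data
print(model.predict(X[:1]))   # a single row with the right number of columns should predict cleanly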
Anastasios Selalmazidis (May 2, 2017 at 12:27 am):
Hi there, I have a question regarding deep learning. In this tutorial we build an MLP with Keras. Is this deep learning, or is it just an MLP with backpropagation?

Jason Brownlee (May 2, 2017 at 5:59 am):
Deep learning is MLP backprop these days: http://machinelearningmastery.com/what-is-deep-learning/ Generally, deep learning refers to MLPs with lots of layers.

Eric T (May 2, 2017 at 8:59 pm):
Hi, would you mind if I use this code as an example of a simple network in a school project of mine? I need to ask before using it, since I cannot find anywhere in this tutorial that you are OK with anyone using the code, and the ethics moment of my course requires me to ask (and of course give credit where credit is due). Kind regards, Eric T.

Jason Brownlee (May 3, 2017 at 7:35 am):
Yes, it's fine, but I take no responsibility and you must credit the source. I answer this question in my FAQ: http://machinelearningmastery.com/start-here/#faq

Binh LN (May 7, 2017 at 3:11 am):
Hi Jason, I have a problem. My dataset has 500 records, but my teacher wants my dataset to have 100,000 records. I must have a new algorithm for data generation. Please help me.

Dp (May 11, 2017 at 2:26 am):
Can you give a deep CNN code which includes 25 layers: in the first conv layer the filter size should be 39x39 with a total of 64 filters, in the 2nd conv layer 21x21 with 32 filters, in the 3rd conv layer 11x11 with 64 filters, and in the 4th conv layer 7x7 with 32 filters, for an input image size of 256x256. I'm completely new to this deep learning thing, but if you can code that for me it would be a great help. Thanks.

Jason Brownlee (May 11, 2017 at 8:33 am):
Consider using an off-the-shelf model like VGG: https://keras.io/applications/

Maple (May 13, 2017 at 12:58 pm):
I have to work with the Facebook metrics dataset, but the result is very low. Help me. I changed the input but it did not improve: http://archive.ics.uci.edu/ml/datasets/Facebook+metrics

Jason Brownlee (May 14, 2017 at 7:24 am):
I have a list of suggestions that may help as a start: http://machinelearningmastery.com/improve-deep-learning-performance/

Alessandro (May 14, 2017 at 1:01 am):
Hi Jason, great tutorial and thanks for your effort. I have a question, since I am a beginner with Keras and TensorFlow. I have installed both of them, the latest versions, and I have run your example but I always get the same error:
Traceback (most recent call last):
File "CNN.py", line 18, in
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
File ".../keras/losses.py", line 51, in binary_crossentropy
return K.mean(K.binary_crossentropy(y_pred, y_true), axis=-1)
File ".../keras/backend/tensorflow_backend.py", line 2771, in binary_crossentropy
logits=output)
TypeError: sigmoid_cross_entropy_with_logits() got an unexpected keyword argument 'labels'
Could you help? Thanks, Alessandro.

Jason Brownlee (May 14, 2017 at 7:30 am):
Ouch, I have not seen this error before. Some ideas: consider trying the Theano backend and see if that makes a difference; try searching/posting on the Keras user group and Slack channel; try searching/posting on StackOverflow or CrossValidated. Let me know how you go.

Alessandro (May 14, 2017 at 9:44 am):
Hi Jason, I found the issue. The TensorFlow installation was outdated, so I have updated it and everything is working nicely.
Good night, Alessandro.

Jason Brownlee (May 15, 2017 at 5:50 am):
I'm glad to hear it Alessandro.

Sheikh Rafiul Islam (May 25, 2017 at 3:36 pm):
Thank you Mr. Brownlee for your wonderful, easy to understand explanation.

Jason Brownlee (June 2, 2017 at 11:41 am):
Thanks.

WAZED (May 29, 2017 at 12:31 am):
Hi Jason, thank you very much for your wonderful tutorial. I have a question regarding the metrics. Is there a default way to declare the metrics "precision" and "recall" in addition to the "accuracy"? Br, WAZED.

Jason Brownlee (June 2, 2017 at 12:15 pm):
Yes, see here: https://keras.io/metrics/

chiranjib konwar (May 29, 2017 at 4:30 am):
Hi Jason, please send me a small note containing resources from which I can learn deep learning from scratch. Thanks for the wonderful read you prepared. Thanks in advance. Yes, my email id is [email protected]

Jason Brownlee (June 2, 2017 at 12:16 pm):
Here: http://machinelearningmastery.com/start-here/#deeplearning

Jeff (June 1, 2017 at 11:48 am):
Why does the NN make mistakes many times?

Jason Brownlee (June 2, 2017 at 12:54 pm):
What do you mean exactly?

kevin (June 2, 2017 at 5:53 pm):
Hi Jason, I seem to be getting an error when applying the fit method:
ValueError: Error when checking input: expected dense_1_input to have shape (None, 12) but got array with shape (767, 8)
I looked this up and the most prominent suggestion seemed to be to upgrade keras and theano, which I did, but that didn't resolve the problem.

Jason Brownlee (June 3, 2017 at 7:24 am):
Ensure you have copied the code exactly from the post.

Hemanth Kumar K (June 3, 2017 at 2:15 pm):
Hi Jason, I am stuck with an error: TypeError: sigmoid_cross_entropy_with_logits() got an unexpected keyword argument 'labels'. My Keras and TensorFlow versions are Keras: 2.0.4, TensorFlow: 0.12.

Jason Brownlee (June 4, 2017 at 7:46 am):
I'm sorry to hear that, I have not seen that error before. Perhaps you could post a question to StackOverflow or the Keras user group?

xena (June 4, 2017 at 6:36 pm):
Can anyone tell me which neural network is being used here? Is it an MLP?

Jason Brownlee (June 5, 2017 at 7:40 am):
Yes, it is a multilayer perceptron (MLP) feedforward neural network.

Nirmesh Shah (June 9, 2017 at 11:00 pm):
Hi Jason, I have run this code successfully on a PC with a CPU. If I have to run the same code on another PC which contains a GPU, what line should I add to make sure that it runs on the GPU?

Jason Brownlee (June 10, 2017 at 8:24 am):
The code would stay the same; your configuration of the Keras backend would change. Please refer to the TensorFlow or Theano documentation.

Prachi (June 12, 2017 at 7:30 pm):
What if I want to train my neural net to detect whether luggage is abandoned or not? How do I proceed with it?

Jason Brownlee (June 13, 2017 at 8:18 am):
This process will help you work through your predictive modeling problem end to end: http://machinelearningmastery.com/start-here/#process

Ebtesam (June 14, 2017 at 11:15 pm):
Hi, I was building a neural machine translation model but the score I get is 0 and I am not sure why.

Jason Brownlee (June 15, 2017 at 8:45 am):
Here is a good list of things to try: http://machinelearningmastery.com/improve-deep-learning-performance/

Sarvottam Patel (June 20, 2017 at 7:31 pm):
Hey Jason, first of all thank you very much from the core of my heart for making me understand this perfectly. I have an error after completing 150 iterations:
File "keras_first_network.py", line 53, in
print("\n%s: %.2f" % (model.metrics_names[1]*100))
TypeError: not enough arguments for format string

Sarvottam Patel (June 20, 2017 at 8:05 pm):
Sorry sir, my bad, actually I wrote it wrongly.

Jason Brownlee (June 21, 2017 at 8:12 am):
Confirm that you have copied the line exactly:
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

Joydeep (June 30, 2017 at 4:15 pm):
Hi Dr Jason, thanks for the tutorial to get started using Keras.
I used the snippet below to directly load the dataset from the URL rather than downloading and saving it, as this makes the code more streamlined without having to navigate elsewhere.

# load pima indians dataset
datasource = numpy.DataSource().open("http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data")
dataset = numpy.loadtxt(datasource, delimiter=",")

Jason Brownlee (July 1, 2017 at 6:28 am):
Thanks for the tip.

Yvette (July 7, 2017 at 9:01 pm):
Thanks for this helpful resource!

Jason Brownlee (July 9, 2017 at 10:38 am):
I'm glad it helped.

Andeep (July 10, 2017 at 1:14 am):
Hi Dr Brownlee, thank you very much for this great tutorial! I would be grateful if you could answer some questions:
1. What does the 7 in "numpy.random.seed(7)" mean?
2. In my case I have 3 input neurons and 2 output neurons. Is the correct notation X = dataset[:,0:3], Y = dataset[:,3:4]?
3. The batch size means how many training examples are used in one epoch, am I right? I had thought we have to use the whole training dataset for training. In that case I would set the batch size as the number of training data pairs I have obtained through experiments, etc. In your example, does the batch (sized 10) mean that the computer always uses the same 10 training examples in every epoch, or are the 10 training examples randomly chosen from all the training data before every epoch?
4. When evaluating the model, what does the loss mean (e.g. in loss: 0.5105 – acc: 0.7396)? Is it the sum of the values of the error function (e.g. mean_squared_error) over the output neurons?

Jason Brownlee (July 11, 2017 at 10:19 am):
You can use any random seed you like, more here: http://machinelearningmastery.com/reproducible-results-neural-networks-keras/
You are referring to the columns in your data. Your network will also need to be configured with the correct number of inputs and outputs (e.g. input and output layers).
Batch size is the number of samples in the dataset to work through before updating the network weights. One epoch is comprised of one or more batches.
Loss is the term being optimized by the network. Here we use log loss: https://en.wikipedia.org/wiki/Cross_entropy

Andeep (July 16, 2017 at 7:43 am):
Thank you for your response, Dr Brownlee!!

Jason Brownlee (July 16, 2017 at 8:00 am):
I hope it helps.

Patrick Zawadzki (July 11, 2017 at 5:35 am):
Is there any way to see the relationship between these inputs? Essentially to understand which inputs affect the output the most, or perhaps which pairs of inputs affect the output the most? Maybe pairing this with unsupervised deep learning? I want to have less of a "black box" for the developed network if at all possible. Thank you for your great content!

Jason Brownlee (July 11, 2017 at 10:34 am):
Yes, try RFE: http://machinelearningmastery.com/feature-selection-machine-learning-python/

Bernt (July 13, 2017 at 10:12 pm):
Hi Jason, thank you for sharing your skills and competence. I want to study the change in weights and predictions between each epoch run. I have tried to use the model.train_on_batch method and the model.fit method with epochs=1 and batch_size equal to all the samples. But it seems like the model doesn't save the newly updated weights. I print predictions before and after, and I don't see a change in the evaluation scores. Parts of the code are printed below. Any idea? Thanks.

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# Run one update of the model trained with X and compared with Y
model.train_on_batch(X, Y)
# Fit the model
model.fit(X, Y, epochs=1, batch_size=768)
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

Jason Brownlee (July 14, 2017 at 8:29 am):
Sorry, I have not explored evaluating a Keras model this way.
Perhapsitisafault,IwouldrecommendpreparingthesmallestpossibleexamplethatdemonstratestheissueandposttotheKerasGitHubissues. Reply iman July18,2017at11:18pm # Hi,Itriedtoapplythistothetitanicdataset,howeverthepredictionswereall0.4.Whatdoyousuggestfor: #createmodel model=Sequential() model.add(Dense(12,input_dim=4,activation=’relu’)) model.add(Dense(4,activation=’relu’)) model.add(Dense(1,activation=’sigmoid’)) model.compile(loss=’binary_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’])#’sgd’ model.fit(X,Y,epochs=15,batch_size=10) Reply JasonBrownlee July19,2017at8:26am # Thispostwillgiveyousomeideastolisttheskillofyourmodel: http://machinelearningmastery.com/improve-deep-learning-performance/ Reply Camus July19,2017at2:14am # HiDrJason, ThisisprobablyastupidquestionbutIcannotfindouthowtodoit…andIambeginneronNeuralNetwork. Ihaverelativelysamenumberofinputs(7)andoneoutput.Thisoutputcantakenumbersbetween-3000and+3000. IwanttobuildaneuralnetworkmodelinpythonbutIdon’tknowhowtodoit. Doyouhaveanexamplewithoutputsdifferentfrom0-1. Tanksinadvance Camus Reply JasonBrownlee July19,2017at8:28am # Ensureyouscaleyourdatathenusetheabovetutorialtogetstarted. Reply KhalidHussain July21,2017at11:28pm # HiJasonBrownlee Iamusingthesamedata“pima-indians-diabetes.csv”butallpredictedvaluesarelessthen1andareinfractionwhichcouldnotdistinguishanyclass. IfIroundoffthenallbecome0. Iamusingmodel.predict(x)function YouarerequestedtokindlyguidemewhatIamdoingwrongarehowcanIachievecorrectpredictedvalue. Thankyou Reply JasonBrownlee July22,2017at8:36am # Consideryouhavecopiedallofthecodeexactlyfromthetutorial. Reply Ludo July25,2017at6:59pm # HelloJason, Thanksyouforyourgreatexample.Ihavesomecomments. –Whyyouhavechoice“12”inputshiddenlayers?andnot24/32..it’sarbitary? –Samequestionaboutepochsandbatch_size? Thisvalueareverysensible!!ihavetrywith32inputsfirstlayer,epchos=500andbatch_size=1000andtheresultisverydifferents…i’amat65%accurancy. Thxforyouhelp. Regards. Reply JasonBrownlee July26,2017at7:50am # Yes,itisarbitrary.Tunetheparametersofthemodeltoyourproblem. Reply AlmoutasemBellahRajab July25,2017at7:32pm # Wow,you’restillreplyingtocommentsmorethanayearlater!!!…you’regreat,,thanks.. Reply JasonBrownlee July26,2017at7:50am # Yep. Reply Jane July26,2017at1:23am # Thanksforyourtutorial,IfounditveryusefultogetmestartedwithKeras.I’vepreviouslytriedTensorFlow,butfounditverydifficulttoworkwith.Idohaveaquestionforyouthough.IhavebothTheanoandTensorFlowinstalled,howdoIknowwhichback-endKerasisusing?Thanksagain Reply JasonBrownlee July26,2017at8:02am # Keraswillprintwhichbackendituseseverytimeyourunyourcode. YoucanchangethebackendintheKerasconfigurationfile(~/.keras/keras.json)whichlookslike: { "image_data_format":"channels_last", "backend":"tensorflow", "epsilon":1e-07, "floatx":"float32" } 123456 {    "image_data_format":"channels_last",    "backend":"tensorflow",    "epsilon":1e-07,    "floatx":"float32"} Reply MasoodImran July28,2017at12:00am # HelloJason, MyunderstandingofMachineLearningorevaluatingdeeplearningmodelsisalmost0.But,thisarticlegivesmelotofinformation.Itisexplainedinasimpleandeasytounderstandlanguage. Thankyouverymuchforthisarticle.WouldyousuggestanygoodreadtofurtherexploreMachineLearningordeeplearningmodelsplease? Reply JasonBrownlee July28,2017at8:31am # Thanks. 
Yes,startrighthere: http://machinelearningmastery.com/start-here/#deeplearning Reply Peggy August3,2017at7:14pm # IfIhavetrainedpredictionmodelsorneuralnetworkfunctionscripts.HowcanIusethemtomakepredictionsinanapplicationthatwillbeusedbyendusers?IwanttousepythonbutitseemsIwillhavetoredothetraininginPythonagain.IsthereawayIcanrewritethescriptsinPythonwithoutretrainingandjustcallthefunctionofpredicting? Reply JasonBrownlee August4,2017at6:58am # Youneedtotrainandsavethefinalmodelthenloadittomakepredictions. Thispostwillmakeitclear: http://machinelearningmastery.com/train-final-machine-learning-model/ Reply Shane August8,2017at2:38pm # Jason,Iusedyourtutorialtoinstalleverythingneededtorunthistutorial.Ifollowedyourtutorialandrantheresultingprogramsuccessfully.Canyoupleasedescribewhattheoutputmeans?Iwouldliketothankyouforyourveryinformativetutorials. Reply Shane August8,2017at2:39pm # 768/768[==============================]–0s–loss:0.4807–acc:0.7826 Epoch148/150 768/768[==============================]–0s–loss:0.4686–acc:0.7812 Epoch149/150 768/768[==============================]–0s–loss:0.4718–acc:0.7617 Epoch150/150 768/768[==============================]–0s–loss:0.4772–acc:0.7812 32/768[>………………………..]–ETA:0s acc:77.99% Reply JasonBrownlee August8,2017at5:12pm # Itissummarizingthetrainingofthemodel. Thefinallineevaluatestheaccuracyofthemodel’spredictions–reallyjusttodemonstratehowtomakepredictions. Reply JasonBrownlee August8,2017at5:11pm # WelldoneShane. Whichoutput? Reply Bene August9,2017at1:02am # HelloJason,ireallylikedyourWorkandithelpedmealotwithmyfirststeps. Butiamnotreallyfamiliarwiththenumpystuff: SohereismyQuestion: dataset=numpy.loadtxt(“pima-indians-diabetes.csv”,delimiter=”,”) #splitintoinput(X)andoutput(Y)variables X=dataset[:,0:8] Y=dataset[:,8] Igetthatthenumpy.loadtxtisextractingtheinformationfromthecvsFile butwhatdoesthestuffintheBracketsmeanlikeX=dataset[:,0:8] whythe“:”andwhy,0:8 itsprobablyprettydumbbutican’tfindagoodexplanationonline😀 thanksreallymuch! Reply JasonBrownlee August9,2017at6:37am # GoodquestionBene,it’scalledarrayslicing: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html Reply Bene August9,2017at10:59pm # ThathelpedmeouttankyouJason🙂 Reply Chen August12,2017at5:43pm # CanItranslateittoChineseandputittoInternetinordertoletotherChinesepeoplecanreadyourarticle? Reply JasonBrownlee August13,2017at9:46am # No,pleasedonot. Reply DeepLearning August12,2017at7:36pm # Itseemsthatusingthisline: np.random.seed(5) …isredundanti.e.theKerasoutputinalooprunningthesamemodelwiththesameconfigurationwillyieldasimilarvarietyofresultsregardlessifit’ssetatall,orwhichnumberitissetto.OramImissingsomething? Reply JasonBrownlee August13,2017at9:52am # Deeplearningalgorithmsarestochastic(randomwithinarange).Thatmeansthattheywillmakedifferentpredictions/learndifferentthingswhenthesamemodelistrainedonthesamedata.Thisisafeature: http://machinelearningmastery.com/randomness-in-machine-learning/ Youcanfixtherandomseedtoensureyougetthesameresult,anditisagoodideafortutorialstohelpbeginnersout: http://machinelearningmastery.com/reproducible-results-neural-networks-keras/ Whenevaluatingtheskillofamodel,Iwouldrecommendrepeatingtheexperimentntimesandtakingskillastheaverageoftheruns.Seeherefortheprocedure: http://machinelearningmastery.com/evaluate-skill-deep-learning-models/ Doesthathelp? 
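As a rough, illustrative sketch of that repeat-and-average idea (reusing the dataset and model from the tutorial above; the choice of 5 repeats is arbitrary), the loop might look like this:

# repeat training several times and report the average accuracy (illustrative sketch)
from numpy import loadtxt, mean, std
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# load the dataset and split into inputs and output, as in the tutorial
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X, y = dataset[:, 0:8], dataset[:, 8]

scores = []
for i in range(5):
    # rebuild and refit the same model from scratch on each repeat
    model = Sequential()
    model.add(Dense(12, input_shape=(8,), activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X, y, epochs=150, batch_size=10, verbose=0)
    _, accuracy = model.evaluate(X, y, verbose=0)
    scores.append(accuracy)

print('Mean accuracy: %.3f (std %.3f)' % (mean(scores), std(scores)))

Each repeat will report a slightly different accuracy because training is stochastic, which is exactly why the average (and spread) across runs is a fairer estimate of skill than a single run.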
Reply DeepLearning August14,2017at3:08am # ThanksJason🙂 Itotallygetwhatitshoulddo,butasIhadpointedout,itdoesnotdoit.Ifyourunthecodesyouhaveprovidedaboveinaloopforsay10times.First10withrandomseedsetandtheother10timeswithoutthatlineofcodealltogether.Thencomparetheresult.AtleasttheresultI’mgetting,issuggestingtheeffectisnottherei.e.bothsetsof10timeswillhavesimilarvariationintheresult. Reply JasonBrownlee August14,2017at6:26am # Itmaysuggestthatthemodelisoverprescribedandeasilyaddressesthetrainingdata. Reply DeepLearning August14,2017at3:12am # Nicepostbytheway>http://machinelearningmastery.com/evaluate-skill-deep-learning-models/ Thanksforsharingit.Beenlatelythinkingabouttheaspectofaccuracyalot,itseemsthatatthemomentit’sa“hotmess”intermsofthewaycommontoolsdoitoutofthebox.IthinkalotofnonPhD/nonexpertcrowd(mostpeople)willatleastinitiallybeeasilyconfusedandmakethekindsofmistakesyoupointoutinyourpost. Thanksforalltheamazingcontributionsyouaremakinginthisfield! Reply JasonBrownlee August14,2017at6:26am # I’mgladithelped. Reply Haneesh December7,2019at10:36pm # HiJason, i’mactuallytryingtofind“spamfilterforquoraquestions”whereihaveadatasetwithlabel-0’sand1’sandquestionscolumns.pleaseletmeknowtheapproachandpathtobuildamodelforthis. Thanks Reply JasonBrownlee December8,2019at6:10am # Soundslikeagreatproject. Thetutorialshereontextclassificationwillhelp: https://machinelearningmastery.com/start-here/#nlp Reply RATNANITINPATIL August14,2017at8:16pm # HelloJason,Thanksforawonderfultutorial. CanIuseGeneticAlgorithmforfeatureselection?? Ifyes,Couldyoupleaseprovidethelinkforit??? Thanksinadvance. Reply JasonBrownlee August15,2017at6:34am # Sure.Sorry,Idon’thaveanyexamples. Generally,computersaresofastitmightbeeasiertotestallcombinationsinanexhaustivesearch. Reply sunny1304 August15,2017at3:44pm # HiJson, Thankyouforyourawesometutorial. Ihaveaquestionforyou. Isthereanyguidelineonhowtodecideonneuronnumberforournetwork. forexampleyouused12forthr1stlayerand8forthesecondlayer. howdoyoudecideonthat? Thanks Reply JasonBrownlee August15,2017at4:58pm # No,thereisnowaytoanalyticallydeterminetheconfigurationofthenetwork. Iusetrialanderror.Youcangridsearch,randomsearch,orcopyconfigurationsfromtutorialsorpapers. Reply yihadad August16,2017at6:53pm # HiJson, Thanksforawonderfultutorial. RunamodelgeneratedbyaCNNittakeshowmuchram,cpu? Thanks Reply JasonBrownlee August17,2017at6:39am # Itdependsonthedatayouareusingtofitthemodelandthesizeofthemodel. Verylargemodelscouldbe500MBofRAMormore. Reply Ankur September1,2017at3:15am # Hi, Pleaseletmeknow,howcanivisualisethecompleteneuralnetworkinKeras………………. Iamlookingforthecompletearchitecture–likenumberofneuronsintheInputLayer,hiddenlayer,outputlayerwithweights. Pleasehavealookatthelinkpresentbelow,heresomeonehascreatedabeutifulvisualisation/architectureusingneuralnetpackageinR. Pleaseletmeknow,canwecreatesuchtypeofmodelinKERAS https://www.r-bloggers.com/fitting-a-neural-network-in-r-neuralnet-package/ Reply JasonBrownlee September1,2017at6:50am # UsetheKerasvisualizationAPI: https://keras.io/visualization/ Reply ASAD October17,2017at3:23am # HelloANKUR,,,,howareyou? youhavetryvisualizationinkeraswhichissuggestedbyJasonBrownlee? ifyouhavetriedthenpleasesendmecodeiamalsotryingbutdidnotwork.. pleaseguideme Reply Adam September3,2017at1:45am # ThankyouDr.Brownleeforthegreattutorial, Ihaveaquestionaboutyourcode: istheargumentmetrics=[‘accuracy’]necessaryinthecodeanddoesitchangetheresultsoftheneuralnetworkorisitjustforshowingmetheaccuracyduringcompiling? thankyou!! 
Reply JasonBrownlee September3,2017at5:48am # No,itjustprintsouttheaccuracyofthemodelattheendofeachepoch.LearnmoreaboutKerasmetricshere: https://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/ Reply PottOfGold September5,2017at12:14am # HiJason, yourworkhereisreallygreat.Ithelpedmealot. IrecentlystumbledupononethingIcannotunderstand: Forthepimasdatasetyoustate: <> WhenIlookatthetableofthepimasdataset,theexamplesareinrowsandthefeaturesincolumns,soyourinputdimensionisthenumberofcolumns.AsfarasIcansee,youdon’tchangethetable. Forneuralnetworks,isn’ttheinputnormally:examples=columns,features=rows? IsthisdifferentforKeras?OrcanIusebothshapes?Anifyes,what’sthedifferenceintheconstructionofthenet? Thankyou!! Reply JasonBrownlee September7,2017at12:36pm # No,featuresarecolumns,rowsareinstancesorexamples. Reply PottOfGold September7,2017at3:35pm # Thanks!🙂 Ihadalotofdiscussionsbecauseofthat. InAndrewNgnewCourseracourseit’sexplainedasexamples=columns,features=rows,buthedoesn’tuseKerasofcourse,butprogrammstheneuralnetworksfromscratch. Reply JasonBrownlee September9,2017at11:38am # Idoubtthat,Ithinkyoumayhavemixeditup.Columnsareneverexamples. Reply PottOfGold October6,2017at6:26pm # ThatswhatIthought,butIlookeditupinthenotationforthenewcourseracourse(deeplearning.ai)andthereitsays:misthenumerofexamplesinthedatasetandnistheinputsize,whereXsuperscriptnxmistheinputmatrix… Buteitherway,youhelpedme!Thankyou.🙂 LinLi September16,2017at1:50am # HiJason,thankyousomuchforyourtutorial,ithelpsmealot.Ineedyourhelpforthequestionbelow: Icopythecodeandrunit.AlthoughIgottheclassificationresults,thereweresomewarningmessagesintheprocess.Asfollows: Warning(fromwarningsmodule): File“C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\callbacks.py”,line120 %delta_t_median) UserWarning:Methodon_batch_end()isslowcomparedtothebatchupdate(0.386946).Checkyourcallbacks. Idon’tknowwhy,andcannotfindanyanswertothisquestion.I’mlookingforwardtoyourreply.Thanksagain! Reply JasonBrownlee September16,2017at8:43am # Sorry,Ihavenotseenthismessagebefore.Itlookslikeawarning,youmightbeabletoignoreit. Reply LinLi September16,2017at12:24pm # Thanksforyourreply.I’mastart-learnerondeeplearning.I’dliketoputitasidetemporarily. Reply Sagar September22,2017at2:51pm # HiJason, Greatarticle,thumbsupforthat.IamgettingthiserrorwhenItrytorunthefileonthecommandprompt.Anysuggestions.Thanksforyouresponse. ####################################################################### C:\Work\ML>pythonkeras_first_network.py UsingTensorFlowbackend. 2017-09-2210:11:11.189829:WC:\tf_jenkins\home\workspace\rel-win\M\windows\PY\ 36\tensorflow\core\platform\cpu_feature_guard.cc:45]TheTensorFlowlibrarywasn ‘tcompiledtouseAVXinstructions,buttheseareavailableonyourmachineand couldspeedupCPUcomputations. 2017-09-2210:11:11.190829:WC:\tf_jenkins\home\workspace\rel-win\M\windows\PY\ 36\tensorflow\core\platform\cpu_feature_guard.cc:45]TheTensorFlowlibrarywasn ‘tcompiledtouseAVX2instructions,buttheseareavailableonyourmachinean dcouldspeedupCPUcomputations. 32/768[>………………………..]–ETA:0s acc:78.52% ####################################################################### Reply JasonBrownlee September23,2017at5:35am # Lookslikewarningmessagesthatyoucanignore. Reply Sagar September24,2017at3:52am # ThanksIgottoknowwhattheproblemwas.Accordingtosection6Ihadsetverboseargumentto0whilecalling“model.fit()”.Nowalltheepochsaregettingprinted. Reply JasonBrownlee September24,2017at5:17am # Gladtohearit. 
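To summarize the two points above in code, the metrics argument only adds reporting and the verbose argument only controls how much of that reporting is printed; a minimal sketch (assuming the model, X and y are already defined as in the tutorial) is:

# metrics only add reporting; they do not change how the network is trained
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# verbose=2 prints one line per epoch, verbose=1 shows a progress bar,
# and verbose=0 prints nothing at all during training
model.fit(X, y, epochs=150, batch_size=10, verbose=2)

# evaluate() returns the loss plus any compiled metrics
loss, accuracy = model.evaluate(X, y, verbose=0)
print('Accuracy: %.2f%%' % (accuracy * 100))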
Reply Valentin September26,2017at6:35pm # HiJason, Thanksfortheamazingarticle.Clearandstraightforward. IhadsomeproblemsinstallingKerasbutwasadvisedtoprefix withtf.contrib.keras soIhavecodelike model=tf.contrib.keras.models.Sequential() Dense=tf.contrib.keras.layers.Dense NowItrytotrainKerasonsomesmalldatafiletoseehowthingsworkout: 1,1,0,0,8 1,2,1,0,4 1,0,0,1,5 1,0,1,0,7 0,1,0,0,8 1,4,1,0,4 1,0,2,1,1 1,0,1,0,7 Thefirst4columnsareinputsandthe5-thcolumnisoutput. Iusethesamecodefortraining(adjustnumberofinputs)asinyourarticle, butthenetworkonlygetsto12.5%accuracy. Anyadvise? Thanks, Valentin Reply JasonBrownlee September27,2017at5:40am # ThanksValentin. Ihaveagoodlistofsuggestionsforimprovingmodelperformancehere: http://machinelearningmastery.com/improve-deep-learning-performance/ Reply Priya October3,2017at2:28pm # HiJason, Itriedreplacingthepimadatawithrandomdataasfollows: X_train=np.random.rand(18,61250) X_test=np.random.rand(18,61250) Y_train=np.array([0.0,1.0,1.0,0.0,1.0,1.0,1.0,0.0,1.0, 0.0,1.0,0.0,0.0,1.0,0.0,0.0,0.0,1.0,]) Y_test=np.array([1.0,0.0,0.0,1.0,1.0,0.0,1.0,1.0,1.0, 1.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,0.0,]) _,input_size=X_train.shape#putthisininput_diminthefirstdenselayer Itooktheround()offofthepredictionssoIcouldseethefullvalueandtheninsertedmyrandomtestdatainmodel.fit(): predictions=model.predict(X_test) preds=[x[0]forxinpredictions] print(preds) model.fit(X_train,Y_train,epochs=100,batch_size=10,verbose=2,validation_data=(X_test,Y_test)) Ifoundsomethingslightlyodd;Iexpectedthepredictedvaluestobearound0.50,plusorminussome,butinstead,Igotthis: [0.49525392,0.49652839,0.49729034,0.49670222,0.49342978,0.49490061,0.49570397,0.4962129,0.49774086,0.49475089,0.4958384,0.49506786,0.49696651,0.49869373,0.49537542,0.49613148,0.49636957,0.49723724] whichisnear0.50butalwayslessthan0.50.Iranthisafewtimeswithdifferentrandomseeds,soit’snotcoincidental.Wouldyouhaveanyexplanationforwhyitdoesthis? Thanks, Priya Reply JasonBrownlee October3,2017at3:46pm # Perhapscalculatethemeanofyourtrainingdataandcompareittothepredictedvalue.Itmightbesimplesamplingerror. Reply Priya October4,2017at1:02am # IfoundoutIwasdoingpredictionsbeforefittingthemodel.(Isupposethatwouldmeanthenetworkhadn’tadjustedtothedata’sdistributionyet.) Reply Saurabh October7,2017at5:59am # HelloJason, Itriedtotrainthismodelonmylaptop,itisworkingfine.ButItriedtotrainthismodelongoogle-cloudwiththesameinstructionsasinyourexample-5.Butitisfailing. Canyoujustletmeknow,whichchangesaretorequiredforthemodel,sothatIcantrainthisoncloud. Reply JasonBrownlee October7,2017at7:37am # Sorry,Idon’tknowaboutgooglecloud. IhaveinstructionshereforrunningonAWS: https://machinelearningmastery.com/develop-evaluate-large-deep-learning-models-keras-amazon-web-services/ Reply tobegit3hub October12,2017at6:40pm # Greatpost.Thanksforsharing. Reply JasonBrownlee October13,2017at5:45am # You’rewelcome. Reply Manoj October12,2017at11:43pm # HiJason, Isthereawaytostorethemodel,onceitiscreatedsothatIcanuseitfordifferentinputdatasetsasandwhenneeded. Reply JasonBrownlee October13,2017at5:48am # Yes,youcansaveittofile.Seethistutorial: https://machinelearningmastery.com/save-load-machine-learning-models-python-scikit-learn/ Reply Cam October23,2017at6:11pm # Igetasyntaxerrorforthe model.fit()lineinthisexample.Isitduetolibraryconflictswiththeanoandtensorflowifihavebothinstalled? Reply JasonBrownlee October24,2017at5:28am # Perhapsensureyourenvironmentisuptodateandthatyoucopiedthecodeexactly. 
Thistutorialcanhelpwithsettingupyourenvironment: http://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/ Reply Cam October24,2017at2:11pm # Thanks,fixed! Reply JasonBrownlee October24,2017at4:01pm # Gladtohearit. Reply DiegoQuintana October25,2017at7:37am # HiJason,thanksfortheexample. HowwouldyoupredictasingleelementfromX?X[0]raisesaValueError ValueError:Errorwhenchecking:expecteddense_1_inputtohaveshape(None,8)butgotarraywithshape(8,1) Thanks! Reply JasonBrownlee October25,2017at3:56pm # Youcanreshapeittohave1rowand8columns: X=X.reshape((1,8)) 1 X=X.reshape((1,8)) Thispostwillgiveyoufurtheradvice: https://machinelearningmastery.com/index-slice-reshape-numpy-arrays-machine-learning-python/ Reply harald April10,2019at8:26pm # Shoulditbe:X[0].reshape((1,8))? Reply JasonBrownlee April11,2019at6:35am # Yep! Reply ShahbazWasti October28,2017at1:30pm # DearSir, Ihaveinstalledandconfiguredtheenvironmentaccordingtoyourdirectionsbutwhilerunningtheprogramihavefollowingerror “fromkeras.utilsimportnp_utils” Reply JasonBrownlee October29,2017at5:50am # Whatistheerrorexactly? Reply Zhengping October30,2017at12:12am # HiJason,thanksforthegreattutorials.Ijustlearntandrepeatedtheprograminyour“YourFirstMachineLearningProjectinPythonStep-By-Step”withoutproblem.Nowtryingthisone,gettingstuckattheline“model=Sequential()”whentheInteractivewindowthrows:NameError:name‘Sequential’isnotdefined.triedtogoogle,can’tfindasolution.IdidimportSequentialfromkeras.modelsasinurexamplecode.copypastedasitis.Thanksinadvanceforyourhelp. Reply Zhengping October30,2017at12:14am # I’mrunningurexamplesinAnaconda4.4.0environmentinvisualstudiocommunityversion.relevantpackageshavebeeninstalledasinurearliertutorialsinstructed. Reply Zhengping October30,2017at12:18am # >>#createmodel …model=Sequential() … Traceback(mostrecentcalllast): File“”,line2,in NameError:name‘Sequential’isnotdefined >>>model.add(Dense(12,input_dim=8,init=’uniform’,activation=’relu’)) Traceback(mostrecentcalllast): File“”,line1,in AttributeError:‘SVC’objecthasnoattribute‘add’ Reply JasonBrownlee October30,2017at5:39am # Thisdoesnotlookgood.Perhapsposttheerrortostackexchangeorotherkerassupport.Ihavealistofkerassupportsiteshere: https://machinelearningmastery.com/get-help-with-keras/ Reply JasonBrownlee October30,2017at5:38am # LookslikeyouneedtoinstallKeras.Ihaveatutorialhereonhowtodothat: https://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/ Reply Akhil October30,2017at5:04pm # HoJason, Thanksalotforthiswonderfultutorial. Ihaveaquestion: Iwanttouseyourcodetopredicttheclassification(1or0)ofunknownsamples.ShouldIcreateonecommoncsvfilehavingthetrain(known)aswellasthetest(unknown)data.Whereasthe‘classification’columnfortheknowndatawillhaveaknownvalue,1or0,fortheunknowndata,shouldIleavethecolumnempty(andletthecodedecidetheoutcome)? Thanksalot Reply JasonBrownlee October31,2017at5:29am # Greatquestion. No,youonlyneedtheinputsandthemodelcanpredicttheoutputs,callmodel.predict(X). Also,thispostwillgiveageneralideaonhowtofitafinalmodel: https://machinelearningmastery.com/train-final-machine-learning-model/ Reply Guilherme November3,2017at1:26am # HiJason, Thisisreallycool!Iamblownaway!Thankssomuchformakingitsosimpleforabeginnertohavesomehandson.Ihaveacouplequestions: 1)wherearetheweights,canIsaveand/orretrievethem? 2)ifIwanttotrainimageswithdogsandcatsandlaterasktheneuralnetworkwhetheranewimagehasacatoradog,howdoIgetmyinputimagetopassasanarrayandmyoutputresulttobe“cat”or“dog”? 
Thanksagainandgreatjob! Reply JasonBrownlee November3,2017at5:20am # Theweightsareinthemodel,youcansavethem: https://machinelearningmastery.com/save-load-keras-deep-learning-models/ Yes,youwouldsaveyourmodel,thencallmodel.predict()onthenewdata. Reply Michael November5,2017at8:33am # HiJason, Areyoufamiliarwithapythontool/packagethatcanbuildneuralnetworkasinthetutorial,butsuitablefordatastreammining? Thanks, Michael Reply JasonBrownlee November6,2017at4:46am # Notreally,sorry. Reply bea November8,2017at1:58am # Hi,there.Couldyoupleaseclarifywhyexactlyyou’vebuiltyournetworkwith12neuronsinthefirstlayer? “Thefirstlayerhas12neuronsandexpects8inputvariables.Thesecondhiddenlayerhas8neuronsandfinally,theoutputlayerhas1neurontopredicttheclass(onsetofdiabetesornot)…” Should’ntithave8neuronsatthestart? Thanks Reply JasonBrownlee November8,2017at9:28am # Theinputlayerhas8,thefirsthiddenlayerhas12.Ichose12throughalittletrialanderror. Reply Guilherme November9,2017at12:54am # HiJason, Doyouhaveorelsecouldyourecommendabeginner’slevelimagesegmentationapproachthatusesdeeplearning?Forexample,Iwanttotrainsomeneuralnettoautomatically“find”aparticularfeatureoutofanimage. Thanks! Reply JasonBrownlee November9,2017at10:00am # Sorry,Idon’thaveimagesegmentationexamples,perhapsinthefuture. Reply Andy November12,2017at6:56pm # HiJason, IjuststartedmyDLtrainingafewweeksago.AccordingtowhatIlearnedincourse,inordertotraintheparametersfortheNN,weneedtoruntheForwardandBackwardpropagation;however,lookingatyourKerasexample,idon’tfindanyofthesepropagationprocesses.DoesitmeanthatKerashasitsownmechanismtofindtheparametersinsteadofusingForwardandBackwardpropagation? Thanks! Reply JasonBrownlee November13,2017at10:13am # Itisperformingthoseoperationsunderthecoversforyou. Reply Badr November13,2017at11:42am # HiJason, CanyouexplainwhyIgotthefollowingoutput: ValueErrorTraceback(mostrecentcalllast) in() —->1model.compile(loss=’binary_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’]) 2model.fit(X,Y,epochs=150,batch_size=10) 3scores=model.evaluate(X,Y) 4print(“\n%s:%.2f%%”%(model.metrics_names[1],scores[1]*100)) /Users/badrshomrani/anaconda/lib/python3.5/site-packages/keras/models.pyincompile(self,optimizer,loss,metrics,sample_weight_mode,**kwargs) 545metrics=metrics, 546sample_weight_mode=sample_weight_mode, –>547**kwargs) 548self.optimizer=self.model.optimizer 549self.loss=self.model.loss /Users/badrshomrani/anaconda/lib/python3.5/site-packages/keras/engine/training.pyincompile(self,optimizer,loss,metrics,loss_weights,sample_weight_mode,**kwargs) 620loss_weight=loss_weights_list[i] 621output_loss=weighted_loss(y_true,y_pred, –>622sample_weight,mask) 623iflen(self.outputs)>1: 624self.metrics_tensors.append(output_loss) /Users/badrshomrani/anaconda/lib/python3.5/site-packages/keras/engine/training.pyinweighted(y_true,y_pred,weights,mask) 322defweighted(y_true,y_pred,weights,mask=None): 323#score_arrayhasndim>=2 –>324score_array=fn(y_true,y_pred) 325ifmaskisnotNone: 326#CastthemasktofloatXtoavoidfloat64upcastingintheano /Users/badrshomrani/anaconda/lib/python3.5/site-packages/keras/objectives.pyinbinary_crossentropy(y_true,y_pred) 46 47defbinary_crossentropy(y_true,y_pred): —>48returnK.mean(K.binary_crossentropy(y_pred,y_true),axis=-1) 49 50 /Users/badrshomrani/anaconda/lib/python3.5/site-packages/keras/backend/tensorflow_backend.pyinbinary_crossentropy(output,target,from_logits) 1418output=tf.clip_by_value(output,epsilon,1–epsilon) 1419output=tf.log(output/(1–output)) 
-> 1420     return tf.nn.sigmoid_cross_entropy_with_logits(output, target)
   1421
   1422

/Users/badrshomrani/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/nn_impl.py in sigmoid_cross_entropy_with_logits(_sentinel, labels, logits, name)
    147   # pylint: disable=protected-access
    148   nn_ops._ensure_xent_args("sigmoid_cross_entropy_with_logits", _sentinel,
--> 149                            labels, logits)
    150   # pylint: enable=protected-access
    151

/Users/badrshomrani/anaconda/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py in _ensure_xent_args(name, sentinel, labels, logits)
   1696   if sentinel is not None:
   1697     raise ValueError("Only call %s with "
-> 1698                      "named arguments (labels=..., logits=..., ...)" % name)
   1699   if labels is None or logits is None:
   1700     raise ValueError("Both labels and logits must be provided.")

ValueError: Only call sigmoid_cross_entropy_with_logits with named arguments (labels=..., logits=..., ...)

Jason Brownlee November 14, 2017 at 10:05 am #

Perhaps double check you have the latest versions of the keras and tensorflow libraries installed?!

Badr November 14, 2017 at 10:50 am #

keras was outdated

Jason Brownlee November 15, 2017 at 9:44 am #

Glad to hear you fixed it.

Mikael November 22, 2017 at 8:20 am #

Hi Jason, thanks for your short tutorial, helps a lot to actually get your hands dirty with a simple example.

I have tried 5 different parameters and got some interesting results to see what would happen. Unfortunately, I didn't record running time.

                   Test1      Test2      Test3      Test4      Test5      Test6      Test7
Number of layers   3          3          3          3          3          3          4
Train set          768        768        768        768        768        768        768
Iterations         150        100        1000       1000       1000       150        150
Rate of update     10         10         10         5          1          1          5
Errors             173        182        175        139        161        169        177
Values             768        768        768        768        768        768        768
% Error            23.0000%   23.6979%   22.7865%   18.0990%   20.9635%   22.0052%   23.0469%

I can't seem to see a trend here... that could put me on the right track to adjust my hyperparameters. Do you have any advice on that?

Jason Brownlee November 22, 2017 at 11:17 am #

Something is wrong. Here is a good list of things to try:
http://machinelearningmastery.com/improve-deep-learning-performance/

Nikolaos November 28, 2017 at 10:58 am #

Hi, I try to implement the above example with fer2013.csv but I receive an error. Is it possible to help me to implement this correctly?
from keras.models import Sequential
from keras.layers import Dense
import numpy
import numpy as np

# fix random seed for reproducibility
numpy.random.seed(7)

Y = []
X = []

# load dataset
for line in open("fer2013.csv"):
    row = line.split(',')
    Y.append(int(row[0]))
    X.append([int(p) for p in row[1].split()])
X, Y = np.array(X) / 255.0, np.array(Y)
print(Y.shape)
print(X.shape)

# create model
model = Sequential()
model.add(Dense(12, input_dim=(35887, 2304), activation='tanh'))
model.add(Dense(8, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))

# compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# fit model
model.fit(X, Y, epochs=150, batch_size=1)

# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1] * 100))

# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)

Jason Brownlee November 29, 2017 at 8:10 am #

Sorry, I cannot debug your code. What is the problem exactly?

Tanya December 2, 2017 at 12:06 am #

Hello, I have a bit of a general question. I have to do forecasting for restaurant sales (meaning that I have to predict 4 meals based on historical daily sales data), weather conditions (such as temperature, rain, etc.), official holidays and in/off season. I have to perform that forecasting using neural networks. I am unfortunately not very skilled in Python. On my computer I have Python 2.7 and I have installed Anaconda. I am trying to learn by exercising with your codes, Mr. Brownlee, but somehow I cannot run the code at all (in Spyder). Can you tell me what version of Python and Anaconda I have to install on my computer and in which environment (JupyterLab, notebook, qtconsole, Spyder, etc.) I can run the code, so that it works and does not give an error from the very beginning? I will be very thankful for your response.
KG
Tanya

Jason Brownlee December 2, 2017 at 9:02 am #

Perhaps this tutorial will help you set up and confirm your environment:
http://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/
I would also recommend running code from the command line, as IDEs and notebooks can introduce and hide errors.

Eliah December 3, 2017 at 10:53 am #

Hi Dr. Brownlee. I looked over the tutorial and I had a question regarding reading the data from a binary file. For instance, I am working on solving the sliding tile n-puzzle using neural networks, but I seem to have trouble getting my data, which is in a binary file and gives the number of moves required for the n-puzzle to be solved. I am not sure if you have dealt with this before, but any help would be appreciated.

Jason Brownlee December 4, 2017 at 7:43 am #

Sorry, I don't know about your binary file.
Perhaps after you load your data, you can convert it to a numpy array so that you can provide it to a neural net?
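As a tiny, illustrative sketch of that suggestion (the two rows below are made-up values in the same layout as the tutorial dataset, standing in for whatever you parse out of your own file), the goal is simply to end up with numeric NumPy arrays before calling fit():

import numpy as np

# pretend these rows were parsed from a file (CSV, binary, or otherwise)
rows = [[6, 148, 72, 35, 0, 33.6, 0.627, 50, 1],
        [1, 85, 66, 29, 0, 26.6, 0.351, 31, 0]]

data = np.asarray(rows, dtype='float32')  # convert the Python lists to a numeric array
X = data[:, 0:8]  # input features
y = data[:, 8]    # output class

print(X.shape, y.shape)  # X and y can now be passed to model.fit(X, y, ...)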
Reply Eliah December4,2017at9:28am # Thanksforthetip,I’lltryit. Reply Wafaa December7,2017at4:59pm # Thankyouveryverymuchforallyourgreattutorials. IfIwantedtoaddbatchlayeraftertheinputlayer,howshouldIdoit? CuzIappliedthistutorialonadifferentdatasetandfeaturesandIthinkIneednormalizationorstandardizationandIwanttodoittheeasiestway. Thankyou, Reply JasonBrownlee December8,2017at5:35am # Irecommendpreparingthedatapriortofittingthemodel. Reply zaheer December9,2017at3:03am # thanksforsharingsuchnicetutorials,ithelpedmealot.iwanttoprinttheconfusionmatrixfromtheaboveexample.andonemorequestion. ifihave 20-inputvariable 1-classlabel(binary) and400instances howiwouldknow,settingupthedenselayerparameterinthefirstlayerandhiddenlayerandoutputlayer.likeaboveexampleyouhaveplaced.12,8,1 Reply JasonBrownlee December9,2017at5:44am # Irecommendtrialanderrortoconfigurethenumberofneuronsinthehiddenlayertoseewhatworksbestforyourspecificproblem. Reply zaheer December9,2017at3:29am # C:\Users\zaheer\AppData\Local\Programs\Python\Python36\python.exeC:/Users/zaheer/PycharmProjects/PythonBegin/Bin-CLNCL-Copy.py UsingTensorFlowbackend. Traceback(mostrecentcalllast): File“C:/Users/zaheer/PycharmProjects/PythonBegin/Bin-CLNCL-Copy.py”,line28,in model.fit(x_train,y_train,epochs=100,batch_size=100) File“C:\Users\zaheer\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\models.py”,line960,infit validation_steps=validation_steps) File“C:\Users\zaheer\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py”,line1574,infit batch_size=batch_size) File“C:\Users\zaheer\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py”,line1407,in_standardize_user_data exception_prefix=’input’) File“C:\Users\zaheer\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py”,line153,in_standardize_input_data str(array.shape)) ValueError:Errorwhencheckinginput:expecteddense_1_inputtohaveshape(None,20)butgotarraywithshape(362,1) Reply JasonBrownlee December9,2017at5:45am # Ensuretheinputshapematchesyourdata. Reply AnamZahra December10,2017at7:40pm # DearJason!Greatjobaverysimpleguide. Iamtryingtoruntheexactcodebutthereisaneror str(array.shape)) ValueError:Errorwhencheckingtarget:expecteddense_3tohaveshape(None,1)butgotarraywithshape(768,8) HowcanIresolve. Ihavewindows10andspyder. Reply JasonBrownlee December11,2017at5:24am # Sorrytohearthat,perhapsconfirmthatyouhavethelatestversionofNumpyandKerasinstalled? Reply nazekhassouneh December11,2017at7:33am # afterrunthiscode,iwillcalculatetheaccuracy,howidid,i iwanttosplitthedatasetintotestdata,trainingdata andevaluatethemodelandcalculatetheaccuracy thankdr. Reply Suchith December21,2017at2:35pm # Inthemodelhowmanyhiddenlayersarethere? Reply JasonBrownlee December21,2017at3:35pm # Thereare2hiddenlayers,1inputlayerand1outputlayer. Reply AmareMahtesenu December22,2017at9:55am # hithere.thisblogisveryawesomeliketheAdrian’spyimagesearchblog.IhaveonequestionandthatisdoyouhaveorwillyouhaveatutorialonkerasframeworkwithSSDorYoloarchitechtures? Reply JasonBrownlee December22,2017at4:16pm # Thanksforthesuggestion,Ihopetocovertheminthefuture. Reply KyujinChae January8,2018at2:22pm # Thanksforyourawesomearticle. Iamreallyenjoying ‘MachineLearningMastery’!! Reply JasonBrownlee January8,2018at3:54pm # Thanks! Reply LuisGaldo January9,2018at8:41am # HelloJason! Thisisanawesomearticle! IamwritingareportforasubjectinuniversityandIhaveusedyourcodeduringmyimplementation,woulditbepossibletocitethispostinbibtex? Thankyou! 
Reply JasonBrownlee January9,2018at3:17pm # Sure,youcancitethewebpagedirectly. Reply NikhilGupta January25,2018at8:05pm # Myquestionisregardingpredict.Iusedtogetdecimalsinthepredictionarray.Suddenly,IstartedseeingonlyIntegers(0or1)intherun.Anyideawhatcouldbecausingthechange? predictions=model.predict(X2) predictions Out[3]: array([[0.], [0.], [0.], …, [0.], [0.], [0.]],dtype=float32) Reply JasonBrownlee January26,2018at5:39am # Perhapschecktheactivationfunctionontheoutputlayer? Reply NikhilGupta January28,2018at3:30am # #createmodel.FullyconnectedlayersaredefinedusingtheDenseclass model=Sequential() model.add(Dense(12,input_dim=len(x_columns),activation=’relu’))#12neurons,8inputs model.add(Dense(8,activation=’relu’))#Hiddenlayerwith8neurons model.add(Dense(1,activation=’sigmoid’))#1outputlayer.Sigmoidgive0/1 Reply joe January27,2018at1:25am # ==================RESTART:/Users/apple/Documents/deep1.py================== UsingTensorFlowbackend. Traceback(mostrecentcalllast): File“/Users/apple/Documents/deep1.py”,line20,in model.compile(loss=’binary_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’]) File“/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/keras/models.py”,line826,incompile **kwargs) File“/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/keras/engine/training.py”,line827,incompile sample_weight,mask) File“/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/keras/engine/training.py”,line426,inweighted score_array=fn(y_true,y_pred) File“/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/keras/losses.py”,line77,inbinary_crossentropy returnK.mean(K.binary_crossentropy(y_true,y_pred),axis=-1) File“/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py”,line3069,inbinary_crossentropy logits=output) TypeError:sigmoid_cross_entropy_with_logits()gotanunexpectedkeywordargument‘labels’ >>> Reply JasonBrownlee January27,2018at5:58am # Ihavenotseemthiserror,sorry.Perhapstrypostingtostackoverflow? Reply Atefeh January27,2018at4:04pm # HelloMr.Janson AfterinstallingAnacondaanddeeplearninglibraries,IreadyourFreemini-courseandItriedtowritethecodeaboutthehandwrittendigitrecognition. Iwrotethecodesinjupyternotebook,amIright? ifnotwhereshouldIwritethecodes? andifIwanttouseanotherdataset(myowndataset)howcanIuseinthecode? andhowcanIseetheresult,forexampletheaccuracypercentage? Iamreallysorryformysimplequestions!Ihavewrittenalotofcodein“Matlab”butIamreallyabeginnerinPythonandAnaconda,myteacherforcemetousePythonandkerasformyproject. thankyouverymuchforyourhelp Reply JasonBrownlee January28,2018at8:22am # Anotebookisfine. YoucanwritecodeinaPythonscriptandthenrunthescriptdirectly. 
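Coming back to the earlier question in this thread about getting decimals rather than whole numbers from predict(): with a sigmoid activation on the output layer the model returns probabilities between 0 and 1, and you can round or threshold them yourself to get crisp class labels. A minimal sketch (assuming the model and X from the tutorial) is:

# predict() with a sigmoid output returns probabilities in [0, 1]
probabilities = model.predict(X)

# threshold the probabilities to get crisp 0/1 class labels
classes = (probabilities > 0.5).astype('int32')

for prob, label in zip(probabilities[:5], classes[:5]):
    print('probability %.3f => class %d' % (prob[0], label[0]))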
Reply Atefeh January28,2018at12:01am # HelloMr.Jansonagain IwrotethecodebelowfromyourFreeminicourseforhandwrittendigitrecognition,butafterrunningIfacedthesyntaxerror: fromkeras.datasetsimportmnist … (X_train,y_train),(X_test,y_test)=mnist.load_data() X_train=X_train.reshape(X_train.shape[0],1,28,28) X_test=X_test.reshape(X_test.shape[0],1,28,28) fromkeras.utilsimportnp_utils … y_train=np_utils.to_categorical(y_train) y_test=np_utils.to_categorical(y_test) model=Sequential() model.add(Conv2D(32,(3,3),padding=’valid’,input_shape=(1,28,28), activation=’relu’)) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Flatten()) model.add(Dense(128,activation=’relu’)) model.add(Dense(num_classes,activation=’softmax’)) model.compile(loss=’categorical_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’]) File“”,line2 2model.add(Conv2D(32,(3,3),padding=’valid’,input_shape=(1,28,28), ^ SyntaxError:invalidsyntax wouldyoupleasehelpme?! thanksalot Reply JasonBrownlee January28,2018at8:25am # This: model.add(Conv2D(32,(3,3),padding=’valid’,input_shape=(1,28,28), activation=’relu’)) 12 model.add(Conv2D(32,(3,3),padding=’valid’,input_shape=(1,28,28),activation=’relu’)) shouldbe: model.add(Conv2D(32,(3,3),padding=’valid’,input_shape=(1,28,28),activation=’relu’)) 1 model.add(Conv2D(32,(3,3),padding=’valid’,input_shape=(1,28,28),activation=’relu’)) Reply Lila January29,2018at8:04am # Thankyoufortheawsomeblogandexplanations.Ihavejustaquestion:Howcanwegetpredictedvaluesbythemodel..Manythanks Reply JasonBrownlee January29,2018at8:21am # Asfollows: X=... yhat=model.predict(X) 12 X=...yhat=model.predict(X) Reply Lila January30,2018at1:22am # Thankyouforyourpromptanswer.IamtryingtolearnhowkerasmodelsworkandIused.Itrainedthemodellikethis: model.compile(loss=’mean_squared_error’,optimizer=’sgd’,metrics=[‘MSE’]) AsoutputIhavethoselines Epoch10000/10000 10/200[>………………………..]–ETA:0s–loss:0.2489–mean_squared_error:0.2489 200/200[==============================]–0s56us/step–loss:0.2652–mean_squared_error:0.2652 andmyquestionwhatthedifferencebetweenthetwolines(MSEvalues) Reply JasonBrownlee January30,2018at9:53am # Theyshouldbethesamething.Onemaybecalculatedattheendofeachbatch,andoneattheendofeachepoch. Reply Atefeh January30,2018at4:28am # hello afterrunningagainitshowanerror: NameErrorTraceback(mostrecentcalllast) in() —->1model=Sequential() 2model.add(Conv2D(32,(3,3),padding=’valid’,input_shape=(1,28,28),activation=’relu’)) 3model.add(MaxPooling2D(pool_size=(2,2))) 4model.add(Flatten()) 5model.add(Dense(128,activation=’relu’)) NameError:name‘Sequential’isnotdefined Reply JasonBrownlee January30,2018at9:55am # Youaremissingtheimports.Ensureyoucopyallcodefromthecompleteexampleattheend. Reply Atefeh January31,2018at1:02am # fromkeras.datasetsimportmnist … (X_train,y_train),(X_test,y_test)=mnist.load_data() X_train=X_train.reshape(X_train.shape[0],1,28,28) X_test=X_test.reshape(X_test.shape[0],1,28,28) fromkeras.utilsimportnp_utils … y_train=np_utils.to_categorical(y_train) y_test=np_utils.to_categorical(y_test) model=Sequential() 2model.add(Conv2D(32,(3,3),padding=’valid’,input_shape=(1,28,28),activation=’relu’)) 3model.add(MaxPooling2D(pool_size=(2,2))) 4model.add(Flatten()) 5model.add(Dense(128,activation=’relu’)) 6model.add(Dense(num_classes,activation=’softmax’)) 7model.compile(loss=’categorical_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’]) Reply Atefeh February2,2018at5:01am # hello pleasetellmehowcanIfindoutthattensorflowandkerasarecorrectlyinstalledonmysystem. 
maybetheproblemisthat,becausenocoderunsinmyjupyter.andno“import”actswell(forexampleimportpandas) thankyou Reply JasonBrownlee February2,2018at8:23am # Seethispost: https://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/ Reply Dan February3,2018at12:29am # Hi.I’mtotallynewtomachinelearningandI’mtryingtowrapmyheadaroundit. IhaveaproblemIcan’tquitesolveyet.Anddon’tknowwheretostartactually. Ihaveadictionarywithafewkey:valuepairs.Thekeyisarandom4digitnumberfrom0000to9999.Andthevalueforeachkeyissetasfollows:ifadigitinanumberiseither0,6or9thenitsweightis1,ifadigitis8thenit’sweightis2,anyotherdigithasaweightof0.Alltheweightsaresummarisedthenandhereyouhavethevalueforthekey.(example:{‘0000’:4,‘1234’:0,‘1692’:2,‘8800’:6}–andsoon). NowI’mtryingtobuildamodelthatwillpredictthecorrectvalueofagivenkey.(i.eifIgiveit2222theansweris0,ifIgiveit9011–it’s2).WhatIdidfirstiscreatedaCSVfilewith5columns,firstfourisasplit(byasingledigit)keyfrommydictionary,andthefifthcolumnisthevalueforeachkey.NextIcreatedadatasetanddefinedamodel(likethistutorialbutwithinput_dim=4).NowwhenItrainthemodeltheaccuracywon’tgohigherthen~30%.Alsoyourmodelisbasedonbinaryoutput,whereasmineshouldhaveanintegerfrom0to8.WheredoIgofromhere? Thankyouforallyoureffortinadvance!🙂 Reply JasonBrownlee February3,2018at8:42am # Thispostmighthelpyounaildownyourproblemasapredictivemodelingproblem: http://machinelearningmastery.com/how-to-define-your-machine-learning-problem/ Reply Alex February5,2018at5:22am # ThereisonethingIjustdontget. Anexampleofrowdatais6,148,72,35,0,33.6,0.627,50,1 Iguessthenumberattheendisifthepersonhasdiabetes(1)ordoesnot(0),butwhatIdontunderstandishowIknowthe‘prediction’isaboutthat0or1,teherearealotofothervariablesinthedata,andIdontsee‘diabetes’beingalabelforanyofthat. So,howdoIknoworhowdoIsetwichvariable(number)Iwanttopredict? Reply JasonBrownlee February5,2018at7:49am # Youinterpretthepredictioninyourapplicationorusage. Themodeldoesnotcarewhattheinputsandoutputsare,itdoesthebestitcan.Itdoesnotintrinsicallycareaboutdiabetes. Reply blaisexen February6,2018at9:14am # hi, @JasonBrownlee,MasterofKerasPython. I’mdevelopingafacerecognitiontesting,IsuccessfullyusedRprop,itwasgoodforstaticimagesorfacepictures,Ialsohavetestsvmresults. WhatdoyouthinkinyourexperiencedthatKerasisbetterorpowerfulthanRprop? becauseIwasalsothinkingtousedKeras(1:1)forfinalresultofRprop(1:many). orwhichdoyouthinkisbettersystem? thanksinadvancefortheadvices. IalsoheardoneoftheleaderofcommercialfacerecognizersusesPNN(useslibopenblas),soIreallydoubtwhichonetochooseformyfinalthesisandapplication. Reply JasonBrownlee February6,2018at9:29am # Whatdoyoumeanbyrprop?Ibelieveitisjustanoptimizationalgorithm,whereasKerasisadeeplearninglibrary. https://en.wikipedia.org/wiki/Rprop Reply blaisexen February17,2018at10:46am # Ok,IthinkIunderstandyou. IusedAccord.Net Rproptestingwasgood MLRtestingwasgood SVMtestingwasgood RBMtestingwasgood Iusedclassificationforfaceimages Theyareonlygoodforstaticfacepictures100×100 butifIusedanotherpicturefromthem, these4testingIhavefailed. DoyouthinkifIusedKerasinimagefacerecognitionwillhaveagoodresultorgoodprediction? becauseifKeraswillhaveagoodresultthenI’llhavetousedcesarsouzakerasc# https://github.com/cesarsouza/keras-sharp thanksforthereply. Reply JasonBrownlee February18,2018at6:45am # Tryitandsee. Reply CHIRANJEEVI February8,2018at8:52pm # Whatisthedifferencebetweentheaccuracywegetwhenwefitthemodelandtheaccuracy_score()ofsklearn.metrics,whattheymeanexactly? 
Reply JasonBrownlee February9,2018at9:05am # Accuracyisasummaryofthenumberofpredictionsthatweremadecorrectlyoutofallpredictionsthatweremade. Itisusedasanestimateofmodelskillonnewoutofsampledata. Reply Shinan February8,2018at9:09pm # isweatherforecastingcandoneusingRNN? Reply JasonBrownlee February9,2018at9:06am # No.Weatherforecastingisdonewithensemblesofphysicssimulationsonverylargecomputers. Reply CHIRANJEEVI February9,2018at3:56pm # wehaven’tpredictinganytingduringthefit(itsjustatraining,likemappingF(x)=Y) butstillgettingacc,whatisthisacc? Epoch1/150 768/768[==============================]–1s1ms/step–loss:0.6771–acc:0.6510 Thankyouinadvance Reply JasonBrownlee February10,2018at8:50am # Predictionsaremadeaspartofbackpropagatingerror. Reply lcy1031 February12,2018at1:00pm # HiJason, Manythankstoyouforagreattutorial.Ihavecouplequestionstoyouasfollowings. 1).HowcanIgetthescoreofPrediction? 2).HowcanIoutputtheresultofpredictruntoafileinwhichtheoutputislistedbyvertical? Iseeyoueverywheretoanswerquestionsandhelppeople.Yourtimeandpatienceweregreatlyappreciated! Charles Reply JasonBrownlee February12,2018at2:50pm # Youcanmakepredictionswithamodelasfollows: yhat=model.predict(X) Youcanthensavethenumpyarrayresulttofile. Reply Callum February21,2018at10:11am # HiI’vejustfinishedthistutorialbuttheonlyproblemiswhatareweactuallyfindingintheresultsasinwhatdoaccuracyandlossmeanandwhatweareactuallyfindingout. I’mreallynewtothewholeneuralnetworksthinganddon’treallyunderstandthemyet,I’dbeverygratefulifyou’reabletoreply ManyThanks Callum Reply JasonBrownlee February22,2018at11:12am # Accuracyisthemodelskillintermsofthenumberofcorrectpredictionsdividedbythetotalnumberofpredictions. Lossthefunctionthatthenetworkisoptimising,somethingdifferentiableandrelatabletothemetricofinterestforthemodel,inthiscaselogarithmiclossusedforclassification. Reply PedroWenner February23,2018at1:27am # HiJason, Firstofallcongratulationsforyourawesomework,IfinallygotthehangofML(hopefully,haha). So,testingsomechangesinthenumberofneuronsandbatchsize/epochs,Iachieved99.87%ofaccuracy. TheparametersIusedwere: #createmodel model=Sequential() model.add(Dense(240,input_dim=8,init=’uniform’,activation=’relu’)) model.add(Dense(160,init=’uniform’,activation=’relu’)) model.add(Dense(1,init=’uniform’,activation=’sigmoid’)) #Compilemodel model.compile(loss=’binary_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’]) #Fitthemodel model.fit(X,Y,epochs=1500,batch_size=100,verbose=2) AndwhenIrunit,Ialwaysget99,87%ofaccuracy,whichIthinkit’sagoodthing,right?PleasetellmeifIdidsomethingwrongorifthisisafalsepositive. Thankyouinadvanceandsorryforthebadenglish😉 Reply JasonBrownlee February23,2018at12:00pm # thataccuracyisgreat,therewillalwaysbesomeerror. Reply Shiny March2,2018at12:56am # Theaboveexampleisverygoodsir,Iwanttodopricechangepredictionofelectronicsinonlineshoppingproject.Canyougiveanysuggestionsaboutmyproject.Youhadanyexampleofpricepredictionusingneuralnetworkpleasesendalinksir. Reply JasonBrownlee March2,2018at5:33am # Iwouldrecommendfollowingthisprocess: https://machinelearningmastery.com/start-here/#process Reply awaludin March6,2018at12:38am # Hi,veryhelpfulexample.ButIstilldon’tunderstandwhyyouload X=dataset[:,0:8] Y=dataset[:,8] IfIdo X=dataset[:,0:7]itwon’twork Reply JasonBrownlee March6,2018at6:16am # Youcanlearnmoreaboutindexingandslicingnumpyarrayshere: https://machinelearningmastery.com/index-slice-reshape-numpy-arrays-machine-learning-python/ Reply JeongKim March8,2018at1:48pm # Thankyouforthetutorial. 
Perhaps,someonealreadytoldyouthis.Thedatasetisnolongeravailable. Reply JasonBrownlee March8,2018at2:55pm # Thanksforthenote,I’llfixthatupASAP. Reply WesleyCampbell March9,2018at1:24am # Thanksverymuchfortheconciseexample!Asan“interestedamateur”withmoreexperiencecodingforscientificdatamanipulationthanforsoftwaredevelopment,asimple,high-levelexplanationlikethisoneismuchappreciated.Ifindsometimesthatdocumentationpagescanbeabitlow-levelformyliking,evenwithcodingexperiencemultiplelanguages.ThisarticlewasallIneededtogetstarted,andwasmuchmorehelpfulthanother“officialtutorials.” Reply JasonBrownlee March9,2018at6:24am # Thanks,I’mgladtohearthatWesley. Reply Trung March10,2018at12:55am # Thankyouforyourtutorial,butthedatasetisnotaccessible.Couldyoupleasefixit. Reply JasonBrownlee March10,2018at6:33am # Thanks,I’llfixit. Reply atefeh March16,2018at10:11pm # hello Ihavefoundacodetoconvertingmyimagedatatomnistformat.butIfacetoanerrorbelow. wouldyoupleasehelpme? importos fromPILimportImage fromarrayimport* fromrandomimportshuffle #Loadfromandsaveto Names=[[‘./training-images’,’train’],[‘./test-images’,’test’]] fornameinNames: data_image=array(‘B’) data_label=array(‘B’) FileList=[] fordirnameinos.listdir(name[0])[1:]:#[1:]Excludes.DS_StorefromMacOS path=os.path.join(name[0],dirname) forfilenameinos.listdir(path): iffilename.endswith(“.png”): FileList.append(os.path.join(name[0],dirname,filename)) shuffle(FileList)#Usefullforfurthersegmentingthevalidationset forfilenameinFileList: label=int(filename.split(‘/’)[2]) Im=Image.open(filename) pixel=Im.load() width,height=Im.size forxinrange(0,width): foryinrange(0,height): data_image.append(pixel[y,x]) data_label.append(label)#labelsstart(oneunsignedbyteeach) hexval=“{0:#0{1}x}”.format(len(FileList),6)#numberoffilesinHEX #headerforlabelarray header=array(‘B’) header.extend([0,0,8,1,0,0]) header.append(int(‘0x’+hexval[2:][:2],16)) header.append(int(‘0x’+hexval[2:][2:],16)) data_label=header+data_label #additionalheaderforimagesarray ifmax([width,height])<=256: header.extend([0,0,0,width,0,0,0,height]) else: raiseValueError('Imageexceedsmaximumsize:256×256pixels'); header[3]=3#ChangingMSBforimagedata(0x00000803) data_image=header+data_image output_file=open(name[1]+'-images-idx3-ubyte','wb') data_image.tofile(output_file) output_file.close() output_file=open(name[1]+'-labels-idx1-ubyte','wb') data_label.tofile(output_file) output_file.close() #gzipresultingfiles fornameinNames: os.system('gzip'+name[1]+'-images-idx3-ubyte') os.system('gzip'+name[1]+'-labels-idx1-ubyte') FileNotFoundErrorTraceback(mostrecentcalllast) in() 13 14FileList=[] —>15fordirnameinos.listdir(name[0])[1:]:#[1:]Excludes.DS_StorefromMacOS 16path=os.path.join(name[0],dirname) 17forfilenameinos.listdir(path): FileNotFoundError:[WinError3]Thesystemcannotfindthepathspecified:‘./training-images’ Reply JasonBrownlee March17,2018at8:37am # Lookslikethecodecannotfindyourimages.Perhapschangethepathinthecode? Reply Sayan March17,2018at4:57pm # Thanksalotsir,thiswasaverygoodandintuitivetutorial Reply JasonBrownlee March18,2018at6:01am # Thanks,I’mgladithelped. Reply NikhilGupta March19,2018at11:12pm # Igotapredictionmodelrunningsuccessfullyforfrauddetection.Mydatasetisover50millionandgrowing.Iamseeingapeculiarissue. Whentheloadeddatais10millionorless,MypredictionisOK. AssoonasIload11milliondata,Mypredictionsaturatestoaparticular(say0.48)andkeepsonrepeating.Thatisallpredictionswillbe0.48,irrespectiveoftheinput. Ihavetriedwillmultiplecombinationsofthedensemodel. 
#createmodel model=Sequential() model.add(Dense(32,input_dim=4,activation=’tanh’)) model.add(Dense(28,activation=’tanh’)) model.add(Dense(24,activation=’tanh’)) model.add(Dense(20,activation=’tanh’)) model.add(Dense(16,activation=’tanh’)) model.add(Dense(12,activation=’tanh’)) model.add(Dense(8,activation=’tanh’)) model.add(Dense(1,activation=’sigmoid’)) Reply JasonBrownlee March20,2018at6:21am # Perhapscheckwhetheryouneedtotrainonalldata,oftenasmallsampleissufficient. Reply NikhilGupta March22,2018at2:45am # Oh.Ibelievethatthemachinelearningaccuracywillimproveaswegetmoredataovertime. Reply ChandraSutrisnoTjhong March28,2018at4:43pm # HI, Howdoyoudefinenumberofhiddenlayersandneuronsperlayer? Reply JasonBrownlee March29,2018at6:30am # Therearenogoodheuristics,trialanderrorisagoodapproach.Discoverwhatworksbestforyourspecificdata. Reply Aravind March30,2018at12:12am # Iexecutedthecodeandgottheoutput,buthowtousethispredictionintheapplication. Reply JasonBrownlee March30,2018at6:39am # Dependsontheapplication. Reply Sabarish March30,2018at12:16am # Whatdoesthevalue1.0and0..0signifies?? Reply JasonBrownlee March30,2018at6:39am # Inwhatcontext? Reply Anand April1,2018at3:51pm # Ifnumberofinputsare8thenwhydidyouuse12neuronsininputlayer?Moreoverwhyisactivationfunctionusedininputlayer? Reply JasonBrownlee April2,2018at5:19am # Thenumberofneuronsinthefirsthiddenlayercanbedifferenttothenumberofneuronsintheinputlayer(e.g.numberofinputfeatures).Theyareonlylooselyrelated. Reply Lia April1,2018at11:49pm # HelloSir, Doestheneuralnetworkuseastandardizedindependentvariablevalues,orshouldwefeeditwithstandardizedonesinthefittingandpredictingstages.Thanks Reply JasonBrownlee April2,2018at5:23am # Trybothandseewhatworksbestforyourspecificpredictivemodelingproblem. Reply MarkLittlewood October27,2021at9:18am # HiIwasplayingwitha2inputdatasetandwhenIhadthefirstlayersetatDense(4itonlyoutputNaNfortheloss.HoweverwhenIreducedthisto3Igotmeaningfullossoutput.IstheresomethingaboutthemaximumDensvalueinrelationtotheinputsthatcausesthis? Reply AdrianTam October27,2021at12:56pm # Thereshouldnotbe.ItismorelikelyduetohowthelayersareinitializedthannumberofneuronsintheDenselayer. Reply tareknahool April4,2018at5:17am # youalwaysfantastic,it’sagreatlesson.But,franklyIdon’tknowwhatisthemeaningof “\n%s:%.2f%%”%andwhyyouusedthenumber(1)inthatcode(model.metrics_names[1],scores[1]*100)) Reply JasonBrownlee April4,2018at6:19am # ThisisPythonstringformatting: https://pyformat.info/ Reply AbhilashMenon April5,2018at6:27am # Dr.Brownlee, Whenwepredict,isitpossibletohavethepredictionsforeachrowinthetestdatasetrightnexttoitinthesamerow.IthoughtofprintingpredictionsandthencopyingitinexcelbutIamnotsureifKeraspreservesorder.Couldyoupleasehelpmeoutwiththisissue?Thankssomuchforallyourhelp! Reply JasonBrownlee April5,2018at3:05pm # Yes,theorderofpredictionsmatchestheorderofinputvalues. Doesthathelp? Reply AndreaGrandi April9,2018at6:37am # IsDeepLearningsomekindof“blackmagic”🙂? Ihadpreviouslyusedscikit-learnandMachineLearningforthesamedataset,tryingtoapplyallthetechniquesIdidlearnbothhereandonbooks,togeta76%accuracy. ItriedthisKerastutorial,usingTensorFlowasbackendandI’mgetting80%accuracyatfirsttryO_o Reply JasonBrownlee April10,2018at6:08am # No,notmagic,justdifferent. Welldonethough! Reply MannyCorrao April11,2018at8:30am # Canyoutellusthecolumnnames?Ithinkthatisimportantbecauseithelpsusunderstandwhatthenetworkisevaluatingandlearningabout. 
Thanks, Manny Reply JasonBrownlee April11,2018at4:11pm # Yes,theyarelistedhere: https://github.com/jbrownlee/Datasets/blob/master/pima-indians-diabetes.names Reply rachit April11,2018at7:13pm # WhileExecutingversions.py iamgettingthiserror Traceback(mostrecentcalllast): File“versions.py”,line2,in importscipy File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\scipy\__init__.py”,line61,in fromnumpyimportshow_configasshow_numpy_config File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\numpy\__init__.py”,line142,in from.importadd_newdocs File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\numpy\add_newdocs.py”,line13,in fromnumpy.libimportadd_newdoc File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\numpy\lib\__init__.py”,line8,in from.type_checkimport* File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\numpy\lib\type_check.py”,line11,in importnumpy.core.numericas_nx File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\numpy\core\__init__.py”,line74,in fromnumpy.testingimport_numpy_tester File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\numpy\testing\__init__.py”,line12,in from.importdecoratorsasdec File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\numpy\testing\decorators.py”,line6,in from.nose_tools.decoratorsimport* File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\numpy\testing\nose_tools\decorators.py”,line20,in from.utilsimportSkipTest,assert_warns File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\numpy\testing\nose_tools\utils.py”,line15,in fromtempfileimportmkdtemp,mkstemp File“C:\Users\ATITGARG\Anaconda3\lib\tempfile.py”,line45,in fromrandomimportRandomas_Random File“C:\Users\ATITGARG\random.py”,line7,in fromkeras.modelsimportSequential File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\keras\__init__.py”,line3,in from.importutils File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\keras\utils\__init__.py”,line4,in from.importdata_utils File“C:\Users\ATITGARG\Anaconda3\lib\site-packages\keras\utils\data_utils.py”,line23,in fromsix.moves.urllib.errorimportHTTPError ImportError:cannotimportname‘HTTPError’ Reply JasonBrownlee April12,2018at8:35am # Perhapsyouneedtoupdateyourenvironment? Reply Gray April14,2018at4:25am # Jason–veryimpressivework!Evenmoreimpressiveisyourdetailedanswertoeveryquestion.Iwentthroughthemallandgotalotofusefulinformation.Greatjob! Reply JasonBrownlee April14,2018at6:50am # ThanksGray! Reply octdes April14,2018at2:39pm # HelloJason, Thank’sforthegoodtuto! Howwouldyouname/describethestructureofthisneuronalnetwork? Thepointisthatifindstrangethatyoucanhaveadifferentnmberofinputandofneuronesintheinputlayer.Mostoftheneuronalnetworkdiagrammihaveseen,eachinputisdirectlyconnectedwithoneneuroneoftheinputlayer.Ihaveneverseenaneuronalnetworkdiagrammwherethenumberofinputisdifferentwiththenumberofneuronesintheinputlayer. Doyouhavecounterexampleordothereissomethingiunderstandwrong? Thankyouforyourworkandsharingyourknowledge🙂 Reply JasonBrownlee April15,2018at6:24am # Thetypeofneuralnetworkinthispostisamulti-layerperceptronoranMLPforshort. Thefirst“layer”inthecodeactuallydefinesboththeinputlayerandthefirsthiddenlayeratthesametime. Thenumberofinputsmustmatchthenumberofcolumnsintheinputdata.Thenumberofneuronsinthefirsthiddenlayercanbeanythingyouwant. Doesthathelp? 
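To make that concrete, here is a minimal sketch for the same 8-input problem: the input size is fixed by the number of columns in the data, while the hidden layer width (20 here, chosen arbitrarily) is a free design choice:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
# the first Dense call defines both the input layer (8 features) and the first hidden layer (20 neurons)
model.add(Dense(20, input_shape=(8,), activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # output layer
model.summary()  # prints the layer output shapes and parameter counts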
Reply Ashley April16,2018at7:29am # ThankyouVERYmuchforthistutorial,Jason!ItisthebestIhavefoundontheinternet.Asapoliticalscientistpursuingcomplexoutcomeslikethisone,Iwaslookingformodelsthatallowformorecomplicatedrelationships.Yourcodeandpostaresoclearlyarticulated;IwasabletoadaptitformypurposesmoreeasilythanIthoughtwouldbepossible.Onepossibleextensionofyourwork,andpossiblythistutorial,wouldbetomapthelayersandnodesontoatheoryofthedatageneratingprocess. Reply JasonBrownlee April16,2018at2:54pm # ThanksAshley,I’mgladithelped. Thanksforthesuggestion. Reply EricMiles April20,2018at1:22am # I’mjuststartingoutworkingthroughyoursite–thanksforthegreatresource!IwantedtopointoutwhatIthinkisatypo:inthecodeblockjustbeforeSection2“DefineModel”IbelievewejustwantX=dataset[:,0:7]sothatwedon’tincludetheoutputvariablesinourinputs. Reply JasonBrownlee April20,2018at6:00am # No,itiscorrectEric. Xwillhave8columns(0-7),theoriginaldatasethas9. YoucanlearnmoreaboutarrayslicingandrangesinPythonhere: https://machinelearningmastery.com/index-slice-reshape-numpy-arrays-machine-learning-python/ Reply Rafa April28,2018at12:50am # Greattutorial,finallyIhavefoundagoodwebaboutdeeplearning(Y) Reply JasonBrownlee April28,2018at5:31am # Thanks. Reply Vivek May7,2018at8:31pm # Greattutorialthankforhelp.IhaveoneprojectinwhichihavetodoCADimages(basically3-dmechanicalimageclassification).canyoupleasegiveroadmaphowcaniproceed? Iamnewandidonthaveanyidea Reply JasonBrownlee May8,2018at6:12am # Thisismygeneralroadmapforapredictivemodelingproblem: https://machinelearningmastery.com/start-here/#process Reply Vivek May9,2018at10:03pm # Thanksalotsir.Thiswillhelpmetoproceed Reply JasonBrownlee May10,2018at6:31am # I’mgladtohearthat. Reply Rahmadars May8,2018at1:36am # Thankssirforthetutorial. Actuallyistillhavesomequestion: 1.Isthisbackpropagationneuralnetwork? 2.Howtoinitializenguyen-widrowrandomweights 3.Ihavemyowndataset,eachconsistof1×64matrix,whichisthecorrectone?Inormalizeeachcolumnofit,oreachrowofit? Thanks. Imtheonewhoaskeduinbackpropagationfromscratchpage Reply JasonBrownlee May8,2018at6:16am # Yes,itusesbackpropgationtoupdatetheweights. Sorry,Idon’tknowaboutthatinitializationmethod,youcanseethesupportedmethodshere: https://keras.io/initializers/ Tryasuiteofdatapreparationschemestoseewhatworksbestforyourspecificdatasetandchosenmodel. Reply Hussein May9,2018at10:33pm # HiJason, Thisisaveryniceintrotoadauntingbutintriguingtechnology!IwantedtoplayaroundwithyourcodeandseeifIcouldcomeupwithsomesimpledatasetandseehowthepredictionswillworkout–oneideathatoccurredtomeis,canImakeamodelthatpredictswhatcountryatelephonenumberbelongsto.Sothetrainingdatasetlookslikea2columnCSV,phonenumberandcountry…that’sbasicallyonefeature.Doyouthinkthiswouldbeeffectiveatall?Whatotherfeaturescouldbeaddedhere?I’llstillgivethisashot,butwouldappreciateanythoughts/ideas! Thanks! Reply JasonBrownlee May10,2018at6:33am # Thecountrycodewouldmakeittoosimpleaproblem–e.g.itcanbesolvedwithalook-uptable. Reply Hussein May10,2018at4:24pm # True,Ijustwantedtoseeifmachinelearningcouldbeusedto“figureout”thelookuptableasopposedtobeprovidedwithonebytheuser,givenenoughdata..notapracticaluse-case,butasalearningexercise.Asitturnsout,mydata-setofabout700phonenumberswasn’teffectiveforthis.Butagain,isthisbecausetheproblemhadtoofewfeatures,i.einmycase,justone?WhatifIincreasedthenumberoffeatures,sayphonenumber,countrycode,citythephonenumberbelongsto,maybeeventhecellphonecompanythenumberisregisteredto,doyouthinkthatwouldmakethetrainingmoreeffective? 
Reply JasonBrownlee May11,2018at6:33am # Ifyoucanwriteanifstatementorusealook-uptabletosolvetheproblem,thenitmightbeabadfitformachinelearning. Thispostwillhelpyouframeyourproblem: http://machinelearningmastery.com/how-to-define-your-machine-learning-problem/ Reply Hussein May11,2018at5:15pm # ThanksJasonforthatresource.I’llcheckitout.Ialsocameacrossthis(https://elitedatascience.com/machine-learning-projects-for-beginners)thatI’mreadingthrough,foranyoneelsethat’slookingforasmallMLproblemtosolveasalearningexperience. JasonBrownlee May12,2018at6:27am # Great. FrankLu May14,2018at7:44pm # Greattutorialveryhelpful,thenIhaveaquestion.Whichaccountedforthelargestproportionin8inputs?Wehave8factorsinthedatasetlikepregnancies,glucose,bloodpressureandtheothers.So,Whichfactorismostrelatedtodiabetesused?HowdoweknowthisproportionthroughMLP? Thanks! Reply JasonBrownlee May15,2018at7:53am # Wemightnotknow.Thisisthedifferencebetweendescriptiveandpredictivemodels. Thisisreallytheissueofmodelinterpretability,Iwritemoreaboutithere: https://machinelearningmastery.com/faq/single-faq/how-do-i-interpret-the-predictions-from-my-model Reply Paolo May16,2018at7:59pm # HiJason, thanksforyourtutorials. Ihaveaquestion,doyouusekeraswithpandastoo?Inthiscase,itisbettertoimportdatawihnumpyanyway?Whatdoyousuggest? Thankyouagain, Paolo Reply JasonBrownlee May17,2018at6:31am # Yes,andyes. Reply Stefan November10,2018at1:06am # Howso?Iusuallyseepandas.readcsv()toreadfiles.Doeskerasonlyacceptnumpyarrays? Reply JasonBrownlee November10,2018at6:07am # Correct. Reply zohreh May20,2018at9:14am # Thanksforyourgreattutorial.IhaveacreditcarddatasetandIwanttodofrauddetectiononit.ithas312columns,SobeforedoingDNN,Ishoulddodimensionreduction,thenusingDNN?andanotherquestionisthatIsitpossibletodoCNNonmydatasetaswell? Thankyou Reply JasonBrownlee May21,2018at6:24am # Yes,choosethefeaturesthatbestmaptotheoutputvariable. ACNNcanbeusedifthereisaspatialrelationshipinthedata,suchasasequenceoftransactionsoverspaceortime. Reply zohreh May23,2018at6:44am # Thanksforyouranswer,SoIthinkCNNdoesn’tmakesenseformydataset, Doyouhaveanytutorialforactivelearning? thanksforyourtime. Reply JasonBrownlee May23,2018at2:37pm # Idon’tknowifitisappropriate,Iwastryingtoprovideenoughinformationforyoutomakethatcall. Ihopetocoveractivelearninginthefuture. Reply zohreh May24,2018at3:13am # yesIunderstand,Isaidaccordingtoyourprovidedinformation,thankyousomuchforyouranswersandgreattutorials. MiguelGarcía May24,2018at11:55am # Canyoushareatutorialforfirstneuralnetowrkwithmultilabelsupport? Reply JasonBrownlee May24,2018at1:51pm # Thanksforthesuggestion. Reply Sathish May24,2018at12:57pm # howtocreateconvolutionallayersandvisualizefeaturesinkeras Reply JasonBrownlee May24,2018at1:51pm # Goodquestion,sorry,Idon’thaveaworkedexample. Reply Anam May28,2018at3:52am # DearJason, Igetanerror”ValueError:couldnotconvertstringtofloat:“Kindlyhelptosolvetheissue.AndIamusingmyowndatasetwhichconsistoftextnotnumbers(likethedatasetyouhaveused). Thanks! Reply JasonBrownlee May28,2018at6:04am # Thismightgiveyousomeideas: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me Reply Anam May29,2018at7:26am # DearJason, Iamrunningyourcodeexamplefromsection6.ButIgetanerrorinthefollowingcodesnippet: CodeSnippet: dataset=numpy.loadtxt(“pima_indians.csv”,delimiter=”,”) #splitintoinput(X)andoutput(Y)variables X=dataset[:,0:8] Y=dataset[:,8] Error: ValueError:couldnotconvertstringtofloat:“6 Kindlyguidemetosolvetheissue.Thanksforyourprecioustime. 
Reply JasonBrownlee May29,2018at2:49pm # I’msorrytohearthat,Ihavesomesuggestionshere: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me Reply GautamSharma June19,2018at1:20am # DidyoufindanysolutionasIamgettingthesameerror? Reply moti June4,2018at3:34am # HiDoctor,inthispythoncodewhereshallIgetthe“keras”package? Reply JasonBrownlee June4,2018at6:34am # ThistutorialshowsyouhowtoinstallKeras: https://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/ Reply AmmaraHabib June5,2018at5:13am # Hyjason,Thanksforanamazingpost.Ihaveaquestionherethatcanweusedenselayerasinputfortextclassification(e.g:sentimentclassificationofmoviereviews).Ifyesthanhowcanweconvertthetextdatasetintonumericfordenselayer. Reply JasonBrownlee June5,2018at6:47am # Youcan,althoughitiscommontoonehotencodethetextoruseanembeddinglayer. Ihaveexamplesofbothontheblog. Reply AmmaraHabib June5,2018at9:18am # Thanksforyourprecioustime.Sir,youmeanthatfirstiuseembeddinglayerasinputlayerandtheniusedenselayerasthehiddenlayer? Reply JasonBrownlee June5,2018at3:05pm # Yes. Reply LisaXie June15,2018at1:12pm # Hi,thanksforyourtutorial.Iamwonderinghowyousetthenumberneuronsandactivationfunctionsforeachlayer,eg.12neuronsforthe1stlayerand8forthesecond. Reply JasonBrownlee June15,2018at2:50pm # Iusedalittletrialanderror. Reply Marwa June18,2018at1:25am # Hijason, IdeveloppedtwoneuralnetworksusingkerasbutIhavethiserror: line1336,in_do_call raisetype(e)(node_def,op,message) ResourceExhaustedError:OOMwhenallocatingtensorwithshape[7082368,50] [[Node:training_1/Adam/Variable_14/Assign=Assign[T=DT_FLOAT,_class=[“loc:@training_1/Adam/Variable_14″],use_locking=true,validate_shape=true,_device=”/job:localhost/replica:0/task:0/device:GPU:0”](training_1/Adam/Variable_14,training_1/Adam/zeros_14)]] Haveyouanidea? Thanks. Reply JasonBrownlee June18,2018at6:42am # Sorry,Ihavenotseenthiserrorbefore.Perhapstryposting/searchingonstackoverflow? 
Reply prateekbhadauria June23,2018at11:38pm # sirihavearegressionrelateddatasetwhichcontainsanarrayof49999rowsand20coloumns,iwanttoimplementCNNonthisdataset, iputmycodeaspermyperceptionkindlygivemesuggestion,tocorrectitiwasstuckmainlybyputtingmydensedimensionspecially fromkeras.modelsimportSequential fromkeras.layersimportDense importnumpyasnp importtensorflowastf frommatplotlibimportpyplot fromsklearn.datasetsimportmake_regression fromsklearn.preprocessingimportMinMaxScaler fromsklearn.metricsimportmean_squared_error fromkeras.wrappers.scikit_learnimportKerasRegressor fromsklearn.preprocessingimportStandardScaler fromkeras.layersimportDense,Dropout,Flatten fromkeras.layersimportConv2D,MaxPooling2D fromkeras.optimizersimportSGD seed=7 np.random.seed(seed) fromscipy.ioimportloadmat dataset=loadmat(‘matlab2.mat’) Bx=basantix[:,50001:99999] Bx=np.transpose(Bx) Fx=fx[:,50001:99999] Fx=np.transpose(Fx) fromsklearn.cross_validationimporttrain_test_split Bx_train,Bx_test,Fx_train,Fx_test=train_test_split(Bx,Fx,test_size=0.2,random_state=0) scaler=StandardScaler()#ClassiscreateasScaler scaler.fit(Bx_train)#Thenobjectiscreatedortofitthedataintoit Bx_train=scaler.transform(Bx_train) Bx_test=scaler.transform(Bx_test) model=Sequential() defbase_model(): keras.layers.Dense(Dense(49999,input_shape=(20,),activation=’relu’)) model.add(Dense(20)) model.add(Dense(49998,init=’normal’,activation=’relu’)) model.add(Dense(49998,init=’normal’)) model.compile(loss=’mean_squared_error’,optimizer=‘adam’) returnmodel scale=StandardScaler() Bx=scale.fit_transform(Bx) Bx=scale.fit_transform(Bx) clf=KerasRegressor(build_fn=base_model,nb_epoch=100,batch_size=5,verbose=0) clf.fit(Bx,Fx) res=clf.predict(Bx) ##linebelowthrowsanerror clf.score(Fx,res) Reply JasonBrownlee June24,2018at7:33am # Sorry,Icannotdebugyourcodeforyou.Perhapspostyourcodeanderrortostackoverflow? Reply MadhavPrakash June24,2018at3:01am # HiJason, Lookingatthedataset,Icouldfindthatthereweremanyattributeswitheachofthemdifferingintermsofunits.Whyhaven’tyourescaled/normalisedthedata?butstillmanagedtogetanaccuracyof75%? Reply JasonBrownlee June24,2018at7:35am # Ideally,weshouldrescalethedata. Thereluactivationfunctionismoreflexiblewithunscaleddata. Reply MadhavPrakash June24,2018at4:23pm # Ohkay,thanks. Also,I’veimplementedaNNonadatabasesimilartothis,wheretheaccuracyvariesb/w70-75%.I’vetriedtoincreasetheaccuracybytuningvariousparametersandfunctions(learningrate,no.oflayers,neuronsperlevel,earlystopping,activationfn,initialization,optimizeretc…)butitwasnotasuccess.Myquestioniswhendoicometoknowthati’vereachedthemaximumaccuracypossibleformyimplementation?Doistaycontentwiththecurrentaccuracy? Reply JasonBrownlee June25,2018at6:19am # Whenwerunoutoftimeorideas. Ilistsomemoreideashere: http://machinelearningmastery.com/machine-learning-performance-improvement-cheat-sheet/ Andhere: http://machinelearningmastery.com/improve-deep-learning-performance/ Reply AarronWilson July8,2018at8:19am # Firstofallthanksforthetutorial.AlsoIacknowledgethatthisnetworkismoreforeducationalpurposes.Yetthisnetworkcanbeimprovedto83-84%accuracywithstandardnormalizationalone.Alsoitcanhit93-95%accuracybyusingadeepermodel. 
#Standardnormalization X=StandardScaler().fit_transform(X) #andadeepermodel model=Sequential() model.add(Dense(12,input_dim=8,activation=’relu’)) model.add(Dense(12,activation=’relu’)) model.add(Dense(12,activation=’relu’)) model.add(Dense(12,activation=’relu’)) model.add(Dense(12,activation=’relu’)) model.add(Dense(8,activation=’relu’)) model.add(Dense(1,activation=’sigmoid’)) Reply JasonBrownlee July9,2018at6:30am # Thanks,yes,normalizationisagoodideaingeneralwhenworkingwithneuralnets. Reply Alex July10,2018at3:47am # Hi,thankyouforthisgreatarticle Imaginethatinmydatasetinsteadofdiabetesbeinga0or1Ihave3results,Imean,thedatarowsarelikethis data1,data2,sickness 123,124,0 142,541,0 156,418,1 142,541,1 156,418,2 So,Ineedtocategorizefor3values,IfIusethissameexampleyougaveushowcanIdeterminetheoutput? Reply JasonBrownlee July10,2018at6:51am # TheoutputwillbesicknessAlex.PerhapsIdon’tunderstandyourquestion? Reply Alex July10,2018at7:11am # Theoutputwillbesicknessyes Reply Alex July10,2018at10:17am # SorryformyEnglish,itisnotmynataltongue,Iwillredomyquesyion.WhatImeanisthis,Iwillbehavingalabelwithmorethan2results,0isonesickness,1willbeotherand2willbeother. HowcanIusethemodelyoushowedustofitthe3results? Reply JasonBrownlee July10,2018at2:26pm # Isee,thisiscalledamulti-classclassificationproblem. Thistutorialwillhelp: https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/ Reply adsad July11,2018at1:06am # isitpossibletopredictthelotteryoutcome.ifsohow? Reply JasonBrownlee July11,2018at5:59am # No.Iexplainmorehere: https://machinelearningmastery.com/faq/single-faq/can-i-use-machine-learning-to-predict-the-lottery Reply Tom July14,2018at2:32am # HiJason,Irunyourfirstexamplecodeinthistutorial.butwhatmakesmeconfusedis: Whythefinaltrainingaccuracy(0.7656)isdifferentfromtheevaluatedscores(78.26%)inthesamedatasets(trainingset)?Ican’tfigureitout.Canyoutellmeplease?Thanksalot! Epoch150/150 768/768[==============================]–0s–loss:0.4827–acc:0.7656 32/768[>………………………..]–ETA:0s acc:78.26% Reply JasonBrownlee July14,2018at6:20am # Oneistheperformanceonthetrainingset,theotheronthevalidationset. Youcanlearnmoreaboutthedifferencehere: https://machinelearningmastery.com/difference-test-validation-datasets/ Reply Tom July14,2018at9:09pm # Thanksfortherapidreply.ButInoticedthatinyourcodethetrainingsetandvalidationsetareexactlythesamedataset.Pleasecheckitforconfirmation.Thecodeisinthepart“6.TieItAllTogether”. #Fitthemodel model.fit(X,Y,epochs=150,batch_size=10) #evaluatethemodel scores=model.evaluate(X,Y) So,myproblemisstillthesame:Whythefinaltrainingaccuracy(0.7656)isdifferentfromtheevaluatedscores(78.26%)inthesamedatasets? Thanks! Reply JasonBrownlee July15,2018at6:14am # Perhapsverboseoutputmightbeaccumulatedovereachbatchratherthansummarizingskillattheendofthetrainingepoch. Reply ami July16,2018at2:01am # HelloJason, DoyouhavesometutorialonsignalprocessingusingCNN?IhavecsvfilesofsomebiomedicalsignalslikeECGandiwanttoclassifynormalandabnormalsignalsusingdeeplearning. WithRegards Reply JasonBrownlee July16,2018at6:11am # Yes,Ihaveasuiteoftutorialsscheduledonthistopic.Theyshouldbeoutsoon. Reply EL July16,2018at7:19pm # Hi,thankyousomuchforyourtutorial.Iamtryingtomakeaneuralnetworkthatwilltakeadatasetandreturnifitissuitabletobeanalyzedbyanotherprogramihave.Isitpossibletofeedthiswithacceptabledatasetsandunacceptabledatasetsandthencallitonanewdatasetandthenreturnwhetherthisdatasetisacceptable?Thankyouforyourhelp,Iamverynewtomachinelearning. 
Reply JasonBrownlee July17,2018at6:14am # Tryitandseehowyougo. Reply ami July18,2018at2:37pm # Ohreally!Thankyousomuch.Canyoupleasenotifymewhenthetutorialswillbeoutbecauseiamdoingaprojectandiamstuckrightnow. WithRegards Reply Diagrams July30,2018at2:45pm # Itwouldbeveryveryhelpfulfornewcomersifyouhadadiagramofthenetwork,showingindividualnodesandgraphedges(andbiasnodesandactivationfunctions),andindicatingonitwhichpartsweregeneratedbywhichmodel.addcommands/parameters.Similartohttps://zhu45.org/posts/2017/May/25/draw-a-neural-network-through-graphviz/ I’vetriedvisualizingitwithfromkeras.utils.plot_modelandtensorboard,butneitherproduceanode-leveldiagram. Reply JasonBrownlee July31,2018at5:58am # Thanksforthesuggestion. Reply Aravind July30,2018at7:57pm # cananyonetellasimplewaytorunmyannkerastensorflowbackendinGPU.Thanks Reply JasonBrownlee July31,2018at6:00am # ThesimplestwayIknowhow: https://machinelearningmastery.com/develop-evaluate-large-deep-learning-models-keras-amazon-web-services/ Reply farli August6,2018at1:08pm # Didyouusebackpropagationhere? Reply JasonBrownlee August6,2018at2:54pm # Yes. Reply farli August13,2018at9:40am # Canyoupleasemakeatutorialonconvolutionalneuralnet?Thatwouldbereallyhelpful..:) Reply JasonBrownlee August13,2018at2:27pm # Yes,ihavemanyontheblogalready.Trytheblogsearch. Reply KarimGamal August7,2018at8:52pm # IhaveaproblemwhereIgettheresultasshownbelow Epoch146/150–0s–loss:-1.2037e+03–acc:0.0000e+00 Epoch147/150–0s–loss:-1.2037e+03–acc:0.0000e+00 Epoch148/150–0s–loss:-1.2037e+03–acc:0.0000e+00 Epoch149/150–0s–loss:-1.2037e+03–acc:0.0000e+00 Epoch150/150–0s–loss:-1.2037e+03–acc:0.0000e+00 whereinmydatasettheoutputisavaluebetween0to500notonly0and1 sohowcanIfixthisinmycode Reply JasonBrownlee August8,2018at6:18am # Soundslikearegressionproblem.Changetheactivationfunctionintheoutputlayertolinearandthelossfunctionto‘mse’. Seethistutorial: https://machinelearningmastery.com/regression-tutorial-keras-deep-learning-library-python/ Reply Tim August15,2018at5:54am # AWESOME!!!Thankssomuchforthis. Reply JasonBrownlee August15,2018at6:11am # You’rewelcome,I’mhappyithelped. Reply tania August27,2018at8:35pm # HiJason, Thankyouforthetutorial.IamrelativelynewtoMLandIamcurrentlyworkingonaclassificationproblemthatisnonbinary. Mydatasetconsistsofanumberoflabeledsamples–allmeasuringthesamequantity/unit.Theamounttypicallyrangesfrom10to20labeledsamples/inputs.However,thefeedforwardortestingsamplewillonlycontain7ofthoseinputs(atrandom). I’mstrugglingtofindasolutiontodesigningasystemthatacceptsfewerinputsthanwhatistypicallyfoundinthetrainingset. Reply JasonBrownlee August28,2018at5:59am # Perhapstryfollowingthisprocess: https://machinelearningmastery.com/start-here/#process Reply VaibhavJaiswal September10,2018at6:28pm # Greattutorialthere!Butthemainaspectofthemodelistopredictonasample.Ifiprintthefirstpredictedvalue,itshowsmesomevaluesforallthecolumnsofcategoricalfeatures.Howtogetthepredictednumberfromthesample? Reply JasonBrownlee September11,2018at6:26am # Theorderofthepredictionsmatchestheorderoftheinputs. 
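To make the suggestion to Karim above concrete, here is a minimal sketch of the tutorial network adapted for a regression target (an output between 0 and 500 rather than a 0/1 class). The file name 'regression-data.csv' is hypothetical; the key changes are the linear activation in the output layer and the 'mse' loss.

from numpy import loadtxt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# 'regression-data.csv' is a hypothetical file with 8 numeric inputs and one
# numeric output in the last column (e.g. a value between 0 and 500).
dataset = loadtxt('regression-data.csv', delimiter=',')
X, y = dataset[:, 0:8], dataset[:, 8]

# Same network shape as the tutorial, but a linear output and mse loss for regression.
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=150, batch_size=10, verbose=0)

Accuracy is not reported here because it is not a meaningful metric for regression; mean squared error (or mean absolute error) is used instead.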
Reply Glen September19,2018at10:45pm # IthinkImustbedoingsomethingwrong,Ikeepgettingtheerror: File“C:\Users\glens\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py”,line519,in__exit__ c_api.TF_GetCode(self.status.status)) InvalidArgumentError:Inputtoreshapeisatensorwith10values,buttherequestedshapehas1 [[Node:training_19/Adam/gradients/loss_21/dense_64_loss/Mean_1_grad/Reshape=Reshape[T=DT_FLOAT,Tshape=DT_INT32,_class=[“loc:@training_19/Adam/gradients/loss_21/dense_64_loss/Mean_1_grad/truediv”],_device=”/job:localhost/replica:0/task:0/device:GPU:0″](training_19/Adam/gradients/loss_21/dense_64_loss/mul_grad/Sum,training_19/Adam/gradients/loss_21/dense_64_loss/Mean_1_grad/DynamicStitch/_1703)]] AreyouabletoshedanylightonwhyIwouldgetthiserror? Thankyou Reply JasonBrownlee September20,2018at7:59am # Ihavenotseenthiserror,Ihavesomesuggestionshere: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me Reply Snehasish September19,2018at11:15pm # HiJason,thanksforthisawesometutorial.Ihaveonedoubt–whydidtheevaluationnotproduce100%accuracy?Afterall,weusedthesamedatasetforevaluationastheoneusedfortrainingitself. Reply JasonBrownlee September20,2018at8:00am # Goodquestion! Weareapproximatingachallengingmappingfunction,notmemorizingexamples.Assuch,therewillalwaysbeerror. Iexplainmorehere: https://machinelearningmastery.com/faq/single-faq/why-cant-i-get-100-accuracy-or-zero-error-with-my-model Reply MarkC September27,2018at12:49am # Howdoyoupredictsomethingyouwanttopredictsuchasnewdata.forexampleIdidaspamdetectionbutdontknowhowtopredictwhetherasentenceiwriteisspamornot. Reply JasonBrownlee September27,2018at6:01am # Youcancallmodel.predict()withafinalizedmodel.Morehere: https://machinelearningmastery.com/faq/single-faq/how-do-i-make-predictions Reply Vivek October1,2018at3:17am # HelloSir, Iamnewandunderstoodsomepartofyourcode.Ihavequestioninpredictionmodelbasicallywedivideourdataintotrainingandtestset.Intheexampleabovetheentiredatasetisusedastrainingdataset.Howcanwetrainthemodelontrainingsetuseitforthepredictionontestset? Reply JasonBrownlee October1,2018at6:28am # Greatquestion,yes,trainthemodelonallavailabledataandthenuseittostartmakingpredictions. Morehere: https://machinelearningmastery.com/train-final-machine-learning-model/ Reply Vivek35 October1,2018at7:11am # HelloSir, It’sgreattutorialtounderstand.However,Iamnewandwanttounderstandsomethingoutofit.Intheabovecodewehavetreatedentiredatasetastrainingset.Canwedividethisintotrainingsetandtestset,applymodeltotrainingsetanduseitfortestsetprediction.Howcanweachievewiththeabovecode? Reply JasonBrownlee October1,2018at2:39pm # Thanks. Yes,youcansplitthedatasetmanuallyorusescikit-learntomakethesplitforyou.Iexplainmorehere: https://machinelearningmastery.com/faq/single-faq/how-do-i-evaluate-a-machine-learning-algorithm Reply Lipi October5,2018at6:26am # HiJason, Iamtryingtopredictusingmyneuralnetwork.IhaveusedMinMaxScalerinthefeatureswhiletrainingthedata.Idon’tgetagoodpredictionifIdon’tusethesametransformfunctiononthepredictiondatasetwhichIusedonthefeatureswhiletrainingthedata.Couldyousuggestmethecorrectapproachinthissituation? Reply JasonBrownlee October5,2018at2:29pm # Youmustusethesametransformtobothpreparetrainingdataandtomakepredictionsonnewdata. Reply Lipi October5,2018at10:12pm # Thankyou! 
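Pulling together the questions from Vivek and Lipi above, here is a minimal sketch of a manual train/test split with scikit-learn, where the scaler is fit on the training data only and the same transform is then applied to the test data (or any new data at prediction time). It assumes the same CSV file as the tutorial.

from numpy import loadtxt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Load and split the tutorial dataset into train and test portions.
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X, y = dataset[:, 0:8], dataset[:, 8]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=7)

# Fit the scaler on the training data only, then apply the same transform to the test data.
scaler = MinMaxScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=150, batch_size=10, verbose=0)

_, acc = model.evaluate(X_test, y_test, verbose=0)
print('Test accuracy: %.2f%%' % (acc * 100))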
Reply neenu October6,2018at3:57pm # hiiamnewtothisiwritewfollowingcodeinspyder fromkeras.modelsimportSequential fromkeras.layersimportDense importnumpy #fixrandomseedforreproducibility numpy.random.seed(7) #loadpimaindiansdataset dataset=numpy.loadtxt(“pima-indians-diabetes.txt”,encoding=”UTF8″,delimiter=”,”) #splitintoinput(X)andoutput(Y)variables X=dataset[:,0:8] Y=dataset[:,8] #createmodel model=Sequential() model.add(Dense(12,input_dim=8,activation=’relu’)) model.add(Dense(8,activation=’relu’)) model.add(Dense(1,activation=’sigmoid’)) #Compilemodel model.compile(loss=’binary_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’]) #Fitthemodel model.fit(X,Y,epochs=150,batch_size=10) #evaluatethemodel scores=model.evaluate(X,Y) print(“\n%s:%.2f%%”%(model.metrics_names[1],scores[1]*100)) Andigotthisasoutput runfile(‘C:/Users/DELL/Anaconda3/Scripts/temp.py’,wdir=’C:/Users/DELL/Anaconda3/Scripts’) UsingTensorFlowbackend. Traceback(mostrecentcalllast): File“”,line1,in runfile(‘C:/Users/DELL/Anaconda3/Scripts/temp.py’,wdir=’C:/Users/DELL/Anaconda3/Scripts’) File“C:\Users\DELL\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py”,line668,inrunfile execfile(filename,namespace) File“C:\Users\DELL\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py”,line108,inexecfile exec(compile(f.read(),filename,‘exec’),namespace) File“C:/Users/DELL/Anaconda3/Scripts/temp.py”,line1,in fromkeras.modelsimportSequential File“C:\Users\DELL\Anaconda3\lib\site-packages\keras\__init__.py”,line3,in from.importutils File“C:\Users\DELL\Anaconda3\lib\site-packages\keras\utils\__init__.py”,line6,in from.importconv_utils File“C:\Users\DELL\Anaconda3\lib\site-packages\keras\utils\conv_utils.py”,line9,in from..importbackendasK File“C:\Users\DELL\Anaconda3\lib\site-packages\keras\backend\__init__.py”,line89,in from.tensorflow_backendimport* File“C:\Users\DELL\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py”,line5,in importtensorflowastf File“C:\Users\DELL\Anaconda3\lib\site-packages\tensorflow\__init__.py”,line22,in fromtensorflow.pythonimportpywrap_tensorflow#pylint:disable=unused-import File“C:\Users\DELL\Anaconda3\lib\site-packages\tensorflow\python\__init__.py”,line49,in fromtensorflow.pythonimportpywrap_tensorflow File“C:\Users\DELL\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py”,line74,in raiseImportError(msg) ImportError:Traceback(mostrecentcalllast): File“C:\Users\DELL\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py”,line14,inswig_import_helper returnimportlib.import_module(mname) File“C:\Users\DELL\Anaconda3\lib\importlib\__init__.py”,line126,inimport_module return_bootstrap._gcd_import(name[level:],package,level) File“”,line994,in_gcd_import File“”,line971,in_find_and_load File“”,line955,in_find_and_load_unlocked File“”,line658,in_load_unlocked File“”,line571,inmodule_from_spec File“”,line922,increate_module File“”,line219,in_call_with_frames_removed ImportError:DLLloadfailedwitherrorcode-1073741795 Duringhandlingoftheaboveexception,anotherexceptionoccurred: Traceback(mostrecentcalllast): File“C:\Users\DELL\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py”,line58,in fromtensorflow.python.pywrap_tensorflow_internalimport* File“C:\Users\DELL\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py”,line17,in _pywrap_tensorflow_internal=swig_import_helper() File“C:\Users\DELL\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py”,line16,inswig_import_helper 
returnimportlib.import_module(‘_pywrap_tensorflow_internal’) File“C:\Users\DELL\Anaconda3\lib\importlib\__init__.py”,line126,inimport_module return_bootstrap._gcd_import(name[level:],package,level) ModuleNotFoundError:Nomodulenamed‘_pywrap_tensorflow_internal’ FailedtoloadthenativeTensorFlowruntime. Seehttps://www.tensorflow.org/install/install_sources#common_installation_problems forsomecommonreasonsandsolutions.Includetheentirestacktrace abovethiserrormessagewhenaskingforhelp. Reply JasonBrownlee October7,2018at7:24am # Irecommendthistutorialtohelpyousetupyourenvironment: https://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/ Irecommendthatyoudon’tuseamIDEornotebook: https://machinelearningmastery.com/faq/single-faq/why-dont-use-or-recommend-notebooks Instead,Irecommendyousavecodetoa.pyfileandrunfromthecommandline: https://machinelearningmastery.com/faq/single-faq/how-do-i-run-a-script-from-the-command-line Reply kamal October15,2018at1:08am # sirpleaseprovidethepythoncodeforadaptiveneurofuzzyclassifier Reply JasonBrownlee October15,2018at7:31am # Thanksforthesuggestion. Reply RajanKumar June29,2021at3:44pm # Iamwaitingtooforit. Reply Shahbaz October24,2018at4:44am # blessedonusir, canugivemeideaaboutOCRsystem,formyfinalyearproject,plzgivemeback-endstratigyforOCR,ruhaveanycodeonOCR Reply JasonBrownlee October24,2018at6:32am # Perhapsstarthere: http://machinelearningmastery.com/handwritten-digit-recognition-using-convolutional-neural-networks-python-keras/ Reply AndrewAgib October29,2018at10:39pm # model.compile(loss=’binary_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’]) showasyntaxerroronthatsentencewhatcouldbethereason Reply JasonBrownlee October30,2018at6:02am # Ihavesomesuggestionshere: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me Reply VASUDEVKP November3,2018at10:13pm # HelloJason, Ihavethetheanobackendinstalled.IamusingWindowsOSandduringexecutionIamgettinganerror“NomodulenamedTensorFlow”.Pleasehelp Reply JasonBrownlee November4,2018at6:27am # YoumayhavetochangetheconfigurationofKerastouseTheanoinstead. Moredetailshere: https://keras.io/backend/ Reply ImenDrs November4,2018at7:09am # HiJason, Please,howcanwecalculatetheprecisionandrecallofthisexample? Andthanks. Reply JasonBrownlee November5,2018at6:06am # Youcanusescikit-learnmetrics: http://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics Reply Stefan November10,2018at2:59am # Ithoughtsigmoidandsoftmaxwerequitesimilaractivationfunctions.Butwhentryingthesamemodelwithsoftmaxasactivationforthelastlayerinsteadofsigmoid,myaccuracyismuchmuchworse. Doesthatmakesensetoyou?Ifsowhy?IfeellikeIseesoftmaxmoreofteninothercodethansigmoid. Reply JasonBrownlee November10,2018at6:09am # Nope. Sigmoidfor2classes. Softmaxfor>2classes Reply AmudaKamorudeen November10,2018at4:46pm # I’mworkingonmodelthatwillpredictpropensityofcustomerthatarelikelytoterminatetheirservicewithcompany.Ihavedatasetof70000rowsand500columns,PleasehowcanIpassnumericdataasaninputtoaconvolutionalneuralnetwork(CNN). Reply JasonBrownlee November11,2018at5:59am # CNNsareonlyappropriatefordatawithaspatialrelationship,suchasimages,timeseriesandtext. Reply irfan November18,2018at3:22pm # hijason, iamusingtensorflowasbackend. fromkeras.modelsimportSequential fromkeras.layersimportDense importsys fromkerasimportlayers fromkeras.utilsimportplot_model print(model.layer()) erro. 
————————————————————————— AttributeErrorTraceback(mostrecentcalllast) in 9model.add(Dense(512,activation=’relu’)) 10model.add(Dense(10,activation=’sigmoid’)) —>11print(model.layer()) 12#Compilemodel 13model.compile(loss=’binary_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’]) AttributeError:‘Sequential’objecthasnoattribute‘layer’ Reply JasonBrownlee November19,2018at6:44am # Whyareyoutryingtoprintmodel.layer()? Reply Mario December2,2018at5:30am # HiJason Firstthanksforamazingtutorial,sinceyourscriptsareusinglistofvalueswhilemyinputsarelistof24×20matriceswhicharefilledoutbyvaluesinespecialorderhowtheymeasuredfor3parametersin3000cycles,howcanIfeedthistypematrice-dataorlet’ssayhowcanIfeedstreamofimagesfor3differentparametersIalreadyextractedfromrawdatasetandafterpreprocessingIconvertthemto24*20matricesor.pngimages?HowshouldIchangethisscriptsothatIcanusemydataset? Reply JasonBrownlee December2,2018at6:26am # WhenusinganMLPwithimages,youmustflatteneachmatrixofpixeldatatoasinglerowvector. Reply EvangelosArgyropoulos December18,2018at6:15am # HiJason, Thankfortutorial.1questions. Iusethealgorithmfortimeseriesprediction0=buy1=sell.Doesthismodeloverfit? Reply JasonBrownlee December18,2018at6:27am # Youcanonlyknowifyoutryfittingitandevaluatinglearningcurvesontrainandvalidationdatasets. Reply SOURAVMONDAL December28,2018at7:42am # GreattutorialSir. Isthereawaytovisualizedifferentlayerswiththeirnodesandinterconnectionsamongthem,ofamodelcreatedinkeras(imeanthebasicstructureofaneuralnetworkwithlayersofnodesandinterconnectionsamongthem). Reply JasonBrownlee December29,2018at5:46am # Yes,checkoutthistutorial: https://machinelearningmastery.com/visualize-deep-learning-neural-network-model-keras/ Reply ImenDrs December28,2018at11:29pm # Thanksforthistutorial. Ihaveaproblemwhenitrytocompileandfitmymodel.Itreturnvalueerror:ValueError:couldnotconvertstringtofloat:’24,26,99,31,623,863,77,32,362,998,1315,33,291,14123,39,8,335,2308,349,403,409,1250,417,47,1945,50,188,51,4493,3343,13419,6107,84,18292,339,9655,22498,1871,782,1276,2328,56,17633,24004,24236,1901,6112,22506,26397,816,502,352,24238,18330,7285,2160,220,511,17680,68,5137,26398,875,542,354,2045,555,2145,93,327,26399,3158,7501,26400,8215′. Canyouhelpmeplease. Reply JasonBrownlee December29,2018at5:52am # Perhapsyourdatacontainsastring? Reply ImenDrs December29,2018at7:59am # Thedatacontains”user,number_of_followers,list_of_followers,number_of_followee,list_of_followee,number_of_mentions,list_of_user_mentioned…” thevaluesinthelistareseparatedbycommas. Forexample:“36;3;52,3,87;5;63,785,22,11,6;0;“ Reply Somashekhar January2,2019at4:39am # Hi,Isthereasolutionpostedforsolvingpima-indians-diabetes.csvforpredictionusingLSTM? Reply JasonBrownlee January2,2019at6:42am # No.LSTMsareforsequentialdataonly,andthepimaindiansdatasetisnotasequencepredictionproblem. Reply ImenDrs January4,2019at9:56pm # Isthereawaytousespecificfieldsinthedatasetinsteadoftheentireuploadeddataset. Andthanks. Reply JasonBrownlee January5,2019at6:56am # Yes,fieldsarecolumnsinthedatasetmatrixandyoucanremovethosecolumnsthatyoudonotwanttouseasinputstoyourmodel. Reply Kahina January5,2019at12:43am # Thankyousomuch!It’shelpful Reply JasonBrownlee January5,2019at6:58am # I’mhappytohearthatitwashelpful. 
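On the question above about visualizing the layers and their connections, below is a minimal sketch using the Keras utilities. It assumes the 'model' built in this tutorial, and plot_model additionally requires the pydot and graphviz packages to be installed.

from tensorflow.keras.utils import plot_model

# Print a text summary of the layers, then save a diagram of the layers
# and their connections to a PNG file.
model.summary()
plot_model(model, to_file='model.png', show_shapes=True)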
Reply Khemmarut January12,2019at11:35pm # Traceback(mostrecentcalllast): File“C:/Users/Admin/PycharmProjects/NN/nnt.py”,line119,in rounded=[round(X[:1])forxinpredictions] File“C:/Users/Admin/PycharmProjects/NN/nnt.py”,line119,in rounded=[round(X[:1])forxinpredictions] TypeError:typenumpy.ndarraydoesn’tdefine__round__method Helpmeplease Thankyou. Reply JasonBrownlee January13,2019at5:41am # Perhapsensurethatyourlibrariesareuptodate? Thismighthelp: https://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/ Reply PritiPachpande January31,2019at2:50am # HiJason, Thankyoufortheamazingtutorial.Iamtryingtobuildanautoencodermodelinkerasusingbackendtensorflow. Ineedtousetensorflow(liketf.ifft,tf.fft)functionsinthemodel.CanyouguidemetowardshowcanIdoit?ItriedusinglambdalayerbuttheaccuracydecreaseswhenIuseit. Also,Imusingmodel.predict()functiontocheckthevaluesbetweentheintermediatelayers.AmIdoingitright? Also,canyouguidemetowardshowtousereshapefunctioninkeras? Thanksforyourhelp Reply JasonBrownlee January31,2019at5:36am # Sorry,Idon’tknowaboutthefunctionsyouareusing.Perhapspostonstackoverflow? Reply Crawford January31,2019at9:34pm # HiJason, Yourtutorialsarebrilliant,thanksforputtingallthistogether. Inthistutorialtheresultiseithera1or0,butwhatifyouhavedatawithmorethantwopossibleresults,e.g.0,1,2,orsimilar? CanIdosomethingwiththecodeyouhavepresentedhere,orisawholeotherapproachrequired? IhavesomewhatachievedwhatI’mtryingtodousingyour“firstmachinelearningproject”usingaknnmodel,butIhadtosimplifymydatabystrippingoutsomevariables.Ibelievethereisvalueintheseextravariables,sothoughttheneuralnetworkmightbeuseful,butlikeIsaidIhavethreeclassificationsnottwo. Thanks. Reply JasonBrownlee February1,2019at5:37am # Yes,hereisanexampleofamulti-classclassificationwithaneuralnet: https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/ Reply Crawford February1,2019at10:11pm # Brilliant,thanks. Reply Sergio February1,2019at10:18am # Hi,Imtryingtoconstructaneuralnetworkusingcomplexnumberasinputs,Ifollowedyourrecommendatinsbutigetthefollowingwarning: ` ComplexWarning:Castingcomplexvaluestorealdiscardstheimaginarypartreturnarray(a,dtype,copy=False,order=order) Thecoderunwithoutproblems,butthepredictionsis25%exact. Ispossibletousecomplexnumberinneuralnetworks..? Douhavesomeadvices? Reply JasonBrownlee February1,2019at11:06am # Idon’tthinktheKerasAPIsupportscomplexnumbersasinput. Reply Sergio February1,2019at2:17pm # Douhaveanysuggestiontodealwithcomplexnumbers? Reply JasonBrownlee February2,2019at6:06am # Notoffhand,sorry. PerhapsposttotheKerasusersgrouptoseeifanyonehastriedthisbefore: https://machinelearningmastery.com/get-help-with-keras/ Reply ArnabKumarMishra February1,2019at9:47pm # HiJason, Iamtryingtorunthecodeinthetutorialwithsomeminormodifications,butIamfacingaproblemwiththetraining. Thetraininglossandaccuracybotharestayingthesameacrossepochs(Pleasetakealookatthecodesnippetandtheoutputbelow).Thisisforadifferentdataset,notthediabetesdataset. Ihavetriedtosolvethisproblemusingthesuggestionsgiveninhttps://stackoverflow.com/questions/37213388/keras-accuracy-does-not-change Buttheproblemisstillthere. Canyoupleasetakealookatthisandhelpmesolvethisproblem?Thanks. 
CODE and OUTPUT Snippets:

# create model
model = Sequential()
model.add(Dense(15, input_dim=9, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(xTrain, yTrain, epochs=500, batch_size=10)

Epoch 1/200
81/81 [==============================] - 0s 177us/step - loss: -8.4632 - acc: 0.4691
Epoch 2/200
81/81 [==============================] - 0s 148us/step - loss: -8.4632 - acc: 0.4691
Epoch 3/200
81/81 [==============================] - 0s 95us/step - loss: -8.4632 - acc: 0.4691
...
(Epochs 4 through 50 show exactly the same loss of -8.4632 and accuracy of 0.4691.)

The same goes on for the rest of the epochs as well.

Reply
Jason Brownlee February 2, 2019 at 6:14 am #
I have some suggestions here that might help:
http://machinelearningmastery.com/improve-deep-learning-performance/

Reply
Nagesh February 4, 2019 at 1:50 am #
Hi Jason,
Can you please update me, whether we can plot a graph (epoch vs acc)?
If yes then how.

Reply
Jason Brownlee February 4, 2019 at 5:49 am #
I show how here:
https://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/

Reply
Nils February 5, 2019 at 1:28 am #
Great stuff, thanks!
I just wondered that in chapter 2 there is a description of the "init" parameter, but in all sources it was missing.
I added it like:
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
Then I got this warning:
pima_diabetes.py:25: UserWarning: Update your Dense call to the Keras 2 API: Dense(12, input_dim=8, activation="relu", kernel_initializer="uniform")
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
Solution for me was to use the "kernel_initializer" instead:
model.add(Dense(12, input_dim=8, activation="relu", kernel_initializer="uniform"))
Regarding the same line I got one question: Is it correct that it adds one input layer with 8 neurons AND another hidden layer with 12 neurons?
So, would it result in the same ANN to do this?
model.add(Dense(8, input_dim=8, kernel_initializer='uniform'))
model.add(Dense(8, activation="relu", kernel_initializer='uniform'))

Reply
Jason Brownlee February 5, 2019 at 8:29 am #
Yes, perhaps your version of the book is out of date, email me to get the latest version?
Yes, the definition of the first hidden layer also defines the input layer via an argument.

Reply
Shuja February 8, 2019 at 12:00 am #
Hi Jason
I am getting the following error

(env) [email protected]:~$ python keras_test.py
Using TensorFlow backend.
Traceback (most recent call last):
File "keras_test.py", line 8, in
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
File "/home/shuja/env/lib/python3.6/site-packages/numpy/lib/npyio.py", line 955, in loadtxt
fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
File "/home/shuja/env/lib/python3.6/site-packages/numpy/lib/_datasource.py", line 266, in open
return ds.open(path, mode, encoding=encoding, newline=newline)
File "/home/shuja/env/lib/python3.6/site-packages/numpy/lib/_datasource.py", line 624, in open
raise IOError("%s not found." % path)
OSError: pima-indians-diabetes.csv not found.

Reply
Jason Brownlee February 8, 2019 at 7:52 am #
Looks like the dataset was not downloaded and placed in the same directory as your script.
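Following up on the question above about plotting accuracy per epoch, here is a minimal sketch using the History object returned by fit(). It assumes the 'model', 'X' and 'y' from the tutorial; on older Keras versions the history key is 'acc' rather than 'accuracy'.

import matplotlib.pyplot as plt

# fit() returns a History object whose .history dict holds one value per metric per epoch.
history = model.fit(X, y, epochs=150, batch_size=10, verbose=0)
plt.plot(history.history['accuracy'])  # use 'acc' on older Keras versions
plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.legend(['accuracy', 'loss'])
plt.show()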
Reply Shubham February12,2019at4:55am # Hi,Jason Thanksforthetutorial. DoyouhavesomegoodreferenceoranexamplewhereIcanlearnaboutsettingup“AdversarialNeuralNetworks”. Shubham Reply JasonBrownlee February12,2019at8:08am # Notatthisstage,Ihopetocoverthetopicinthefuture. Reply Daniel March13,2019at8:14am # HeyJason, I’vebeenreadingyourtutorialsforawhilenowonavarietyofMLtopics,andIthinkthatyouwriteverycleanlyandconcisely.ThankyouformakingalmosteverytopicI’veencounteredunderstandable. However,onethingIhavenoticedisthatthecommentsectionsonyourpagessometimescoverthebulkofthewebpage.ThefirstcoupletimesIsawthissite,IsawhowtinymyscrollbarwasandIassumedthatthetutorialwouldbe15pageslong,onlytofindthatyourintroductionswereinfact“gentle”aspromisedandeverythingbutthefirstsliverofthepagewerepeople’sresponsesandyourresponsesback.Ithinkitwouldbeveryusefulifyoucouldsomehowcondensetheresponses(maybea“showresponses”button?)toonlyshowtheactualcontent.Notonlywouldeverythinglookbetter,butIthinkitwouldalsopreventpeoplefrominitiallythinkingyourblogwasexceptionallylong,likeIdidafewtimes. Reply JasonBrownlee March13,2019at8:26am # Greatfeedback,thanksDaniel.I’llseeiftherearesomegoodwordpresspluginsforthis. Reply ismael March22,2019at5:22am # donotworkwhy Reply JasonBrownlee March22,2019at8:39am # Sorrytohearthatyou’rehavingtrouble,whatistheproblemexactly? Reply FelixDaniel March30,2019at7:09am # Awesomeworkonmachinelearning…IwasjustthinkingonhowtostartmyjourneyintoMachineLearning,IrandomlysearchedforpeopleinMachineLearningonLinkedInthat’showIfindmyselfhere…I’mdelightedtoseethis…HereismyfinalbusstoptostartbuildingupinML.ThanksforacceptingmyconnectiononLinkedIn. IhaveaprojectthatamabouttostartbutIdon’tknowhowandtheroadMap.PleaseIneedyourdetailedguideline. Hereisthetopic HumanActivityRecognitionSystemthatControlsoverweightinChildrenandAdults. Reply JasonBrownlee March31,2019at9:22am # Soundslikeagreatproject,youcangetstartedhere: https://machinelearningmastery.com/start-here/#deep_learning_time_series Reply AkshayaE April13,2019at11:38pm # canyoupleaseexplainmewhyweuse12neuronsinthefirstlayer?8areinputsandaretherest4biases? Reply JasonBrownlee April14,2019at5:49am # No,the12referstothe12nodesinthefirsthiddenlayer,nottheinputlayer. Theinputlayerisdefinedbyainput_dimargumentonthefirsthiddenlayer. Iexplainmorehere: https://machinelearningmastery.com/faq/single-faq/how-do-you-define-the-input-layer-in-keras Reply AkshayaE April14,2019at8:09pm # thankyoufortheimmediateresponse.mydoubthasbeencleared. Reply JasonBrownlee April15,2019at7:52am # Happytohearthat. Reply Abhiram April19,2019at11:50pm # hiiJason,abovepredictionsarebetween0to1,Mylabelsare1,1,1,2,2,2,3,3,3……..36,36,36. Nowiwanttopredictclass36thenwhatshouldido?? Reply JasonBrownlee April20,2019at7:39am # Whatproblemareyouhavingexactly? Reply Akash April22,2019at12:56am # HiJason, IamlearningNLPandfacingdifficultieswithunderstandingNLPwithDeepLearning. Please,canyouhelpwithconvertingthefollowingN:NtoN:1model? Iwanttochangemyvec_yfrommax_input_words_amountlengthto1. HowshouldIdefinethelayersanduseLSTMorRNNor…? ThankYou. 
x=df1[‘Question’].tolist() y=df1[‘Answer’].tolist() max_input_words_amount=0 tok_x=[] foriinrange(len(x)): tokenized_q=nltk.word_tokenize(re.sub(r”[^a-z0-9]+”,”“,x[i].lower())) max_input_words_amount=max(len(tokenized_q),max_input_words_amount) tok_x.append(tokenized_q) vec_x=[] forsentintok_x: sentvec=[ft_cbow_model[w]forwinsent] vec_x.append(sentvec) vec_y=[] forsentiny: sentvec=[ft_cbow_model[sent]] vec_y.append(sentvec) fortok_sentinvec_x: tok_sent[max_input_words_amount-1:]=[] tok_sent.append(ft_cbow_model[‘_E_’]) fortok_sentinvec_x: iflen(tok_sent)pythonfirstnn.py UsingTheanobackend. Traceback(mostrecentcalllast): File“firstnn.py”,line14,in model.add(Dense(12,input_dim=8,activation=’relu’)) File“C:\Users\Roger\Anaconda3\lib\site-packages\keras\engine\sequential.py”,line165,inadd layer(x) File“C:\Users\Roger\Anaconda3\lib\site-packages\keras\engine\base_layer.py”,line431,in__call__ self.build(unpack_singleton(input_shapes)) File“C:\Users\Roger\Anaconda3\lib\site-packages\keras\layers\core.py”,line866,inbuild constraint=self.kernel_constraint) File“C:\Users\Roger\Anaconda3\lib\site-packages\keras\legacy\interfaces.py”,line91,inwrapper returnfunc(*args,**kwargs) File“C:\Users\Roger\Anaconda3\lib\site-packages\keras\engine\base_layer.py”,line249,inadd_weight weight=K.variable(initializer(shape), File“C:\Users\Roger\Anaconda3\lib\site-packages\keras\initializers.py”,line218,in__call__ dtype=dtype,seed=self.seed) File“C:\Users\Roger\Anaconda3\lib\site-packages\keras\backend\theano_backend.py”,line2600,inrandom_uniform returnrng.uniform(shape,low=minval,high=maxval,dtype=dtype) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\sandbox\rng_mrg.py”,line872,inuniform rstates=self.get_substream_rstates(nstreams,dtype) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\configparser.py”,line117,inres returnf(*args,**kwargs) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\sandbox\rng_mrg.py”,line779,inget_substream_rstates multMatVect(rval[0],A1p72,M1,A2p72,M2) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\sandbox\rng_mrg.py”,line62,inmultMatVect [A_sym,s_sym,m_sym,A2_sym,s2_sym,m2_sym],o,profile=False) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\compile\function.py”,line317,infunction output_keys=output_keys) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\compile\pfunc.py”,line486,inpfunc output_keys=output_keys) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\compile\function_module.py”,line1841,inorig_function fn=m.create(defaults) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\compile\function_module.py”,line1715,increate input_storage=input_storage_lists,storage_map=storage_map) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\gof\link.py”,line699,inmake_thunk storage_map=storage_map)[:3] File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\gof\vm.py”,line1091,inmake_all impl=impl)) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\gof\op.py”,line955,inmake_thunk no_recycling) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\gof\op.py”,line858,inmake_c_thunk output_storage=node_output_storage) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\gof\cc.py”,line1217,inmake_thunk keep_lock=keep_lock) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\gof\cc.py”,line1157,in__compile__ keep_lock=keep_lock) File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\gof\cc.py”,line1609,incthunk_factory key=self.cmodule_key() File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\gof\cc.py”,line1300,incmodule_key 
c_compiler=self.c_compiler(), File“C:\Users\Roger\Anaconda3\lib\site-packages\theano\gof\cc.py”,line1379,incmodule_key_ np.core.multiarray._get_ndarray_c_version()) AttributeError:(‘Thefollowingerrorhappenedwhilecompilingthenode’,DotModulo(A,s,m,A2,s2,m2),‘\n’,“module‘numpy.core.multiarray’hasnoattribute‘_get_ndarray_c_version'”) Reply Roger May12,2019at1:58am # IfollowedallthestepstosetuptheenvironmentbutwhenIranthecodeIgotanattributeerror‘module‘numpy.core.multiarray’hasnoattribute‘_get_ndarray_c_version” Reply JasonBrownlee May12,2019at6:45am # Perhapstrysearching/postingonstackoverflow? Reply JasonBrownlee May12,2019at6:45am # Ouch,perhapsnumpyisnotinstalledcorrectly? Reply Roger May12,2019at8:34pm # Nonumpy1.16.2doesnotworkwiththeano1.0.3asservedupcurrentlybyAnaconda.Idowngradedtonumpy1.13.0. Reply JasonBrownlee May13,2019at6:46am # ThanksRoger. Reply Aditya May21,2019at5:02pm # HiJason, Thanksforthisamazingexample! WhatIobserveintheexampleisthedatabaseusedispurelynumeric. Mydoubtis: Howcantheexamplebemodifiedtohandlecategoricalinput? WillitworkiftheinputsareOneHotEncoded? Reply JasonBrownlee May22,2019at7:38am # Yes,youcanuseaonehotencodingforourinputcategoricalvariables. Reply Aditya May31,2019at3:41pm # CanyoupleaseprovideagoodreferencepointforOHEinpython? Thanksinadvance!🙂 Reply JasonBrownlee June1,2019at6:09am # Sure: https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/ Reply Aditya June2,2019at3:36am # Ireadthelinkanditwashelpful.Now,Ihaveadoubtspecifictomynetwork. Ihave3categoricalinputwhichhavedifferentsizes.Onehasaround15‘categories’whiletheothertwohave5.SoafterIOneHotencodeeachofthem,doIhavetomaketheirsizessamebypadding?Orit’llworkasitit? JasonBrownlee June2,2019at6:42am # Youcanencodeeachvariableandconcatenatethemtogetherintoonevector. Oryoucanhaveamodelwithoneinputforeachvariableandletthemodelconcatenatethem. Sri June17,2019at7:29pm # Hi, Ifthereisoneindependentvariable(saycountry)withmorethan100labels,howtoresolveit. Ithinkonlyonehotencodingwillnotworkincludingscaling. Isthereanyalternativeforit Reply JasonBrownlee June18,2019at6:37am # Youcantry: –integerencoding –onehotencoding –embedding Testeachandseewhatworksbestforyourspecificdataset. Reply MK June21,2019at7:05pm # Hijason, thanksalotforyourposts,helpedmealot. 1.HowcanIaddconfusionmatrix? 2.HowcanIchangelearningrate? CheersMartin Reply JasonBrownlee June22,2019at6:35am # Addaconfusionmatrix: https://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/ Tunelearningrate: https://machinelearningmastery.com/understand-the-dynamics-of-learning-rate-on-deep-learning-neural-networks/ Reply Guhanpalanivel July1,2019at10:35pm # hijason, Ihavetrainedaneuralnetworkmodelwith6monthsdataanddeployedataremotesite, whenreceivingthenewdataforupcomingmonths, isthereanywaytoautomaticallyupdatethemodelwithadditionofnewtrainingdata? Reply JasonBrownlee July2,2019at7:31am # Yes,perhapstheeasiestwayistorefitthemodelonthenewdataoronallavailabledata. Reply Shubham July5,2019at8:46pm # Hijason, Iwanttoprinttheneuralnetworkscoreasafunctionofoneofthevariable.,howdoidothat? Regards Shubham Reply JasonBrownlee July6,2019at8:35am # Perhapstryalinearactivationunitandamselossfunction? Reply MahaLakshmi July17,2019at7:37pm # Sir,Iamworkingwithsklearn.neural_network.MLPClassifierinPython.nowIwanttogivemyownInitialWeightstoClassifier.howtodothat?pleasehelpme.ThanksinAdvance Reply JasonBrownlee July18,2019at8:25am # Sorry,Idon’thaveanexampleofthis. Perhapstrypostingonstackoverflow? 
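To illustrate the suggestion to Aditya above about categorical inputs of different sizes, here is a minimal sketch that one hot encodes two hypothetical string columns and concatenates the encodings into a single numeric input vector per row; no padding is needed because the encodings are simply stacked side by side.

import pandas as pd

# Two hypothetical categorical columns with different numbers of categories.
df = pd.DataFrame({
    'country': ['AU', 'US', 'US', 'FR'],
    'city': ['Sydney', 'Boston', 'Austin', 'Paris'],
})

# One hot encode each column and concatenate the encodings side by side.
X = pd.get_dummies(df, columns=['country', 'city']).values.astype('float32')
print(X.shape)  # (4, 7): 3 country columns plus 4 city columns

The resulting array can then be passed to the network with input_dim set to X.shape[1].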
Reply MahaLakshmi July18,2019at4:09pm # Thankyouforyourresponse Reply Ron July24,2019at8:39am # Normalizationofthedataincreasestheaccuracyinthe90’s. https://stackoverflow.com/questions/39525358/neural-network-accuracy-optimization Reply JasonBrownlee July24,2019at2:19pm # Thanksforsharing. Reply Hammad July29,2019at6:12pm # Dearsir, Iwouldliketoapplyabovesharedexampleonarraysproducedby“train_test_split”butitdoesnotwork,asthesearraysarenotintheformofnumpy. Letmegiveyouthedetails,Ihave“XYZ”dataset.Thedatasethasthefollowingspecifications: TotalImages=630 2500featureshasbeenextractedfromeachimage.Eachfeaturehasfloattype. TotalClasses=7 Now,afterprocessingthefeaturefile,Ihavegotresultsinthefollowingvariables: XData:containsfeaturesdataintwodimensionalarrayform(rows:630,columns:2500) YData:containoriginallabelsofclassesinonedimensionalarrayform(rows:630,column:1) So,byusingthefollowingcode,Isplitthedatasetintotrainandtestingdata: fromsklearn.model_selectionimporttrain_test_split x_train,x_test,y_train,y_test=train_test_split(XData,YData,stratify=YData,test_size=0.25) Now,Iwouldliketoapplythedeep-learningexamplessharedonthisblogonmydatasetwhichisnowintheformarrays,andgenerateoutputaspredictionoftestingdataandaccuracy. Canyoupleaseletmeknowaboutit,whichcanworkontheabovearrays? Reply JasonBrownlee July30,2019at6:05am # Yes,theKerasmodelcanoperateonnumpyarraysdirectly. PerhapsIdon’tfollowtheproblemthatyou’rehavingexactly? Reply Hammad July30,2019at6:01pm # Dearsir, Thanks,Iconvertedmyarraysintonumpyformat. Now,Ihavefollowedyourtutorialonmulti-classificationproblem(https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/)andusethefollowingcode: ############################################################ importpandas fromkeras.modelsimportSequential fromkeras.layersimportDense fromkeras.wrappers.scikit_learnimportKerasClassifier fromkeras.utilsimportnp_utils fromsklearn.model_selectionimportcross_val_score fromsklearn.model_selectionimportKFold fromsklearn.preprocessingimportLabelEncoder fromsklearn.pipelineimportPipeline fromsklearn.metricsimportaccuracy_score seed=5 totalclasses=7#ClassLabelsare:‘p1’,‘p2’,‘p3’,‘p4’,‘p5’,‘p6’,‘p7′ totalimages=630 totalfeatures=2500#featuresgeneratedfromimages #Datahasbeenimportedfromfeaturefile,whichresultstwoarraysXDataandYData #XDatacontainsfeaturesdatasetwithoutnumpyarrayform #YDatacontainslabelswithoutnumpyarrayform #encodeclassvaluesasintegers encoder=LabelEncoder() encoder.fit(YData) encoded_Y=encoder.transform(YData) #convertintegerstodummyvariables(i.e.onehotencoded) dummy_y=np_utils.to_categorical(encoded_Y) #definebaselinemodel defbaseline_model(): #createmodel model=Sequential() model.add(Dense(8,input_dim=totalfeatures+1,activation=’relu’)) model.add(Dense(totalclasses,activation=’softmax’)) #Compilemodel model.compile(loss=’categorical_crossentropy’,optimizer=’adam’,metrics= ‘accuracy’]) returnmodel estimator=KerasClassifier(build_fn=baseline_model,nb_epoch=200,batch_size=5,verbose=0) x_train,x_test,y_train,y_test=train_test_split(XData,dummy_y,test_size=0.25,random_state=seed) x_train=np.array(x_train) x_test=np.array(x_test) y_train=np.array(y_train) y_test=np.array(y_test) estimator.fit(x_train,y_train) predictions=estimator.predict(x_test) print(predictions) print(encoder.inverse_transform(predictions)) ######################################################## Thecodegeneratesnosyntaxerror. Now,Iwouldliketoask: 1.DoesIhaveappliedthedeeplearning(NeuralNetworkModel)inarightway? 
2.HowcouldIcalculatetheaccuracy,confusionmatrix,andclassification_report? 3.CanyoupleasesuggestwhatothertypeofdeeplearningalgorithmscouldIapplyonthistypeofproblem? Afterapplyingdifferentdeeplearningalgorithm,Iwouldliketocomparetheiraccuraciessuchas,youdidintutorialhttps://machinelearningmastery.com/machine-learning-in-python-step-by-step/,byplottinggraphs. Reply JasonBrownlee July31,2019at6:46am # Sorry,Idon’thavethecapacitytoreviewyourcode. Thispostshowshowtocalculatemetrics: https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/ Irecommendtestingasuiteofmethodsinordertodiscoverwhatworksbestforyourspecificdataset: https://machinelearningmastery.com/faq/single-faq/what-algorithm-config-should-i-use Reply Tyson September3,2019at10:07pm # HiJason, Greattutorial.IamnowtryingnewdatasetsfromtheUCIarchive.HoweverIamrunningintoproblemswhenthedataisincomplete.Ratherthananumberthereisa‘?’indicatingthatthedataismissingorunknown.SoIamgetting ValueError:couldnotconvertstringtofloat:‘?’ Isthereawaytoignorethatdata?Iamsuremanydatasetshavethisissuewherepiecesaremissing. Thanksinadvance! Reply JasonBrownlee September4,2019at5:58am # Yes,youcanreplacemissingdatawiththemeanormedianofthevariable–atleastasastartingpoint. Reply Srinu September10,2019at9:07pm # CanyouprovideGUIcodeforthesamedatalikecallingtheANNmodelfromawebsiteorfromandroidapplication. Reply JasonBrownlee September11,2019at5:33am # Idon’tseewhynot. Reply HemanthKumar September20,2019at12:58pm # dearsir ValueError:Errorwhencheckinginput:expectedconv2d_5_inputtohave4dimensions,butgotarraywithshape(250,250,3) Iamgettingthiserror whatstepsIdid original_image->resizedtosameresolution->convertedtonumpyarray->savedandloadedtox_train->fedintonetworkmodel->modal.fit(x_train..gettingthiserror Reply JasonBrownlee September20,2019at1:42pm # Perhapsstartwiththistutorialforimageclassification: https://machinelearningmastery.com/how-to-develop-a-convolutional-neural-network-to-classify-photos-of-dogs-and-cats/ Reply HemanthKumar September20,2019at3:14pm # thanksforresponsesir🙂 afterthatIamgettinglistindexoutofrangeerroratmodel.fit Reply JasonBrownlee September21,2019at6:43am # I’msorrytohearthat,Ihavesomesuggestionsherethatmayhelp: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me Reply AnthonyTheKoala September26,2019at2:58am # DearDrJason, Thankyouforthistutorial. Ihavebeenplayingaroundwiththenumberoflayersandthenumberofneurons. Inthecurrentcode model=Sequential() model.add(Dense(12,input_dim=8,activation='relu')) model.add(Dense(8,activation='relu')) model.add(Dense(1,activation='sigmoid')) 1234 model=Sequential()model.add(Dense(12,input_dim=8,activation='relu'))model.add(Dense(8,activation='relu'))model.add(Dense(1,activation='sigmoid')) Ihaveplayedaroundwithincreasingthenumbersinthefirstlayer: model=Sequential() model.add(Dense(100,input_dim=8,activation='relu')) model.add(Dense(8,activation='relu')) model.add(Dense(1,activation='sigmoid')) 1234 model=Sequential()model.add(Dense(100,input_dim=8,activation='relu'))model.add(Dense(8,activation='relu'))model.add(Dense(1,activation='sigmoid')) Theresultisthattheaccuracydidn’timprovemuch. Therewasanimprovementintheadditionoflayers. Wheneachlayerhadsayalargenumberofneurons,theaccuracyimproved. 
Thisisnottheonlyexample,butplayingaroundwiththefollowingcode: model=Sequential() model.add(Dense(200,input_dim=8,activation='relu')) model.add(Dense(800,activation='relu')) model.add(Dense(200,activation='relu')) model.add(Dense(400,activation='relu')) model.add(Dense(200,activation='relu')) model.add(Dense(1,activation='sigmoid')) 1234567 model=Sequential()model.add(Dense(200,input_dim=8,activation='relu'))model.add(Dense(800,activation='relu'))model.add(Dense(200,activation='relu'))model.add(Dense(400,activation='relu'))model.add(Dense(200,activation='relu'))model.add(Dense(1,activation='sigmoid')) Theaccuracyachievedwas91.1% Iaddedtwomorelayers model=Sequential() model.add(Dense(200,input_dim=8,activation='relu')) model.add(Dense(800,activation='relu')) model.add(Dense(200,activation='relu')) model.add(Dense(400,activation='relu')) model.add(Dense(200,activation='relu')) model.add(Dense(400,activation='relu')) model.add(Dense(800,activation='relu')) model.add(Dense(1,activation='sigmoid')) 123456789 model=Sequential()model.add(Dense(200,input_dim=8,activation='relu'))model.add(Dense(800,activation='relu'))model.add(Dense(200,activation='relu'))model.add(Dense(400,activation='relu'))model.add(Dense(200,activation='relu'))model.add(Dense(400,activation='relu'))model.add(Dense(800,activation='relu'))model.add(Dense(1,activation='sigmoid')) Theaccuracydroppedslightlyto88% Fromthesebriefexperiments,increasingthenumberofneuronsasinyourfirstexampledidnotincreaseaccuracy. Howeveraddingmorelayersespeciallywithalargenumberofneuronsdidincreasetheaccuracytoabout91% BUTiftherearetoomanylayersthereisaslightdropinaccuracyto88%. Myquestionisthereawaytoincreasetheaccuracyanyfurtherthan91%? Thankyou, AnthonyofSydney Reply JasonBrownlee September26,2019at6:45am # Ifthisisthepimaindiansdataset,thenthebestaccuracyisabout78%via10-foldcrossvalidation,anythingmoreisprobablyoverfitting. Yes,Ihavetonsoftutorialsondiagnosingissueswithmodelsandliftingperformance,youcanstarthere: https://machinelearningmastery.com/start-here/#better Reply AnthonyTheKoala September26,2019at6:05am # DearDrJason, Furtherexperimentation,Iplayedwiththefollowingcode model=Sequential() model.add(Dense(25,input_dim=8,activation='relu')) model.add(Dense(89,activation='relu')) model.add(Dense(377,activation='relu')) model.add(Dense(233,activation='relu')) model.add(Dense(55,activation='relu')) model.add(Dense(1,activation='sigmoid')) 1234567 model=Sequential()model.add(Dense(25,input_dim=8,activation='relu'))model.add(Dense(89,activation='relu'))model.add(Dense(377,activation='relu'))model.add(Dense(233,activation='relu'))model.add(Dense(55,activation='relu'))model.add(Dense(1,activation='sigmoid')) Iobtainedanaccuracyof95%byplayingaroundwiththenumberofneuronsincreasingthendecreasing. Icannotworkoutasystematicwayofimprovingtheaccuracy. Thankyou, AnthonyofSydney Reply JasonBrownlee September26,2019at6:46am # Haha,yes.Thatisthegreatopenproblemwithneuralnets(nogoodtheoriesforhowtoconfigurethem)andwhywemustuseempiricalmethods. Reply AnthonyTheKoala September26,2019at1:57pm # DearDrJason, thankyouforthosereplies. Yes,itwasthePimaIndiandatasetthatiscoveredinthistutorial. BeforeIindulgeinfurtherreadingson10-foldcrossvalidation,pleasebrieflyanswer: *whatisthemeaningofoverfit. *whyisanaccuracyof96%regardedasoverfit. 
Todo:
Play around with simple functions and play around with this tutorial and then look at overfitting.
For example, suppose we have x = 0,1,2,3,4,5 and f(x) = x^2:

x    : 0, 1, 2, 3, 4, 5
f(x) : 0, 1, 4, 9, 16, 25

The aim:
* to see if there is an accurate mapping of the function of x and f(x) for x = 0..5
* to see what happens when we predict for x = 6, 7, 8. Will it be 36, 49, 64?
* we ask if such a thing as overfitting the model exists.
Thank you,
Anthony of Sydney
Reply
Jason Brownlee September 27, 2019 at 7:43 am #
Overfit means better performance on the training set at the cost of performing worse on the test set.
It can also mean better performance on a test/validation set at the cost of worse performance on new data.
I know from experience that the limit on that dataset is 77-78% after having worked with it in tutorials for about 20 years.
Reply
Andrey September 29, 2019 at 8:32 pm #
Hi Jason,
I see the data is not divided into a training set and a test set. Why is that? What does prediction mean in this case?
Andrey
Reply
Jason Brownlee September 30, 2019 at 6:07 am #
It might mean that the result is a little optimistic.
I did that to keep this example very simple and easy to follow.
Reply
Anthony The Koala September 29, 2019 at 9:00 pm #
Dear Dr Jason,
I tried to do the same for a deterministic model of x and fx where x = [0,1,2,3,4,5] and fx = x**2.
I want to see how machine learning operates with a deterministic function.
However I am only getting 16.67% accuracy.
Here is the code based on this tutorial:

from keras.models import Sequential
from keras.layers import Dense
import numpy as np
# Aim is to see how a deterministic function will operate using machine learning
# In year 7 algebra we have x and y. y is known as f(x).
# So here we aim to have a structure of [indep var, dep var]
# that is [x, fx]
# Making a 2D (like) list
x = [i for i in range(6)]  # have a list of x = [0,1,2,3,4,5]
#x = np.array(x)
fx = [x**2 for x in x]  # have a list of fx = [0,1,4,9,16,25]
#fx = np.array(fx)
model = Sequential()
model.add(Dense(100, input_dim=1, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(1, activation='relu'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x, fx, epochs=150, batch_size=2, verbose=0)
_, accuracy = model.evaluate(x, fx)
print('Accuracy: %.2f' % (accuracy*100))

We know that fx = x**2 is predictable. What do I need to do?
Thank you,
Anthony of Sydney
Reply
Jason Brownlee September 30, 2019 at 6:10 am #
Perhaps you need hundreds of thousands of examples?
And perhaps the model will need to be tuned for your problem, e.g. perhaps using mse loss and a linear activation function in the output layer because it is a regression problem.
Reply
Anthony The Koala October 1, 2019 at 5:15 am #
Dear Dr Jason,
I tried with mse loss and a linear activation function and still only obtained 1% accuracy.

from keras.models import Sequential
from keras.layers import Dense
import numpy as np
# Aim is to see how a deterministic function will operate using machine learning
# In year 7 algebra we have x and y. y is known as f(x).
x=[iforiinrange(100)];#havealistofx=[0,1,2,3,4,5] x=np.array(x) fx=[x**2forxinx];#havealistoffx=[0,1,4,9,16,25] fx=np.array(fx) model=Sequential() model.add(Dense(12,input_dim=1,activation='linear')) model.add(Dense(33,activation='linear')) model.add(Dense(1,activation='linear')) #model.compile(loss='mean_squared_error',optimizer='softmax',metrics=['accuracy']) model.compile(loss='mean_squared_error',optimizer='sgd') #model.compile(loss='mean_squared_error') model.fit(x,fx,epochs=10,batch_size=1,verbose=0) _,accuracy=model.evaluate(x,fx) print('Accuracy:%.2f'%(accuracy*100)) 1234567891011121314151617181920212223242526 fromkeras.modelsimportSequentialfromkeras.layersimportDenseimportnumpyasnp #Aimistoseehowadeterministicfunctionwilloperateusingmachinelearning#Inyear7algebrawehavexandy.yisknownasf(x). x=[iforiinrange(100)];#havealistofx=[0,1,2,3,4,5]x=np.array(x) fx=[x**2forxinx];#havealistoffx=[0,1,4,9,16,25]fx=np.array(fx) model=Sequential()model.add(Dense(12,input_dim=1,activation='linear'))model.add(Dense(33,activation='linear'))model.add(Dense(1,activation='linear'))  #model.compile(loss='mean_squared_error',optimizer='softmax',metrics=['accuracy'])model.compile(loss='mean_squared_error',optimizer='sgd')#model.compile(loss='mean_squared_error') model.fit(x,fx,epochs=10,batch_size=1,verbose=0)_,accuracy=model.evaluate(x,fx)print('Accuracy:%.2f'%(accuracy*100)) HoweverIgetthis: 32/100[========>.....................]-ETA:0s 100/100[==============================]-0s312us/step Traceback(mostrecentcalllast): File"C:\Python36\deterministicII.py",line25,in _,accuracy=model.evaluate(x,fx) TypeError:'float'objectisnotiterable 123456 32/100[========>.....................]-ETA:0s100/100[==============================]-0s312us/stepTraceback(mostrecentcalllast):  File"C:\Python36\deterministicII.py",line25,in    _,accuracy=model.evaluate(x,fx)TypeError:'float'objectisnotiterable Iwanttomapadeterministicfunctiontoseeifmachinelearningwillworkoutf(x)withouttheformula. Reply JasonBrownlee October1,2019at7:00am # Accuracyisnotavalidmetricforregressionproblems: https://machinelearningmastery.com/faq/single-faq/how-do-i-calculate-accuracy-for-regression Youareveryclose! Also,tryamuchlargerdatasetofexamples.Hundredsorthousands. Reply AnthonyTheKoala October1,2019at10:18am # DearDrJason, Iremovedthemodel.evaluatefromtheprogram.BUTstillIhavenotgotasatisfactorymatchoftheexpectedandactualvalues. fromkeras.modelsimportSequential fromkeras.layersimportDense importnumpyasnp #Aimistoseehowadeterministicfunctionwilloperateusingmachinelearning #Inyear7algebrawehavexandy.yisknownasf(x). x=[iforiinrange(100)];#havealistofx=[0,1,2,3,4,5] x=np.array(x) fx=[x**2forxinx];#havealistoffx=[0,1,4,9,16,25] fx=np.array(fx) model=Sequential() model.add(Dense(100,input_dim=1,activation='linear')) model.add(Dense(100,activation='linear')) model.add(Dense(1,activation='linear')) model.compile(loss='mean_squared_error',optimizer='adam') model.fit(x,fx,epochs=1000,batch_size=1000,verbose=0) #Removingthemodel.evaluatecode predictions=model.predict_classes(x) foriinrange(6): print('%s=>%d(expected%d)'%(x[i],predictions[i],fx[i])) 1234567891011121314151617181920212223242526 fromkeras.modelsimportSequentialfromkeras.layersimportDenseimportnumpyasnp #Aimistoseehowadeterministicfunctionwilloperateusingmachinelearning#Inyear7algebrawehavexandy.yisknownasf(x). 
x=[i  foriinrange(100)];#havealistofx=[0,1,2,3,4,5]x=np.array(x)   fx=[x**2  forxinx];#havealistoffx=[0,1,4,9,16,25]fx=np.array(fx) model=Sequential()model.add(Dense(100,input_dim=1,activation='linear'))model.add(Dense(100,activation='linear'))model.add(Dense(1,activation='linear'))model.compile(loss='mean_squared_error',optimizer='adam') model.fit(x,fx,epochs=1000,batch_size=1000,verbose=0) #Removingthemodel.evaluatecode predictions=model.predict_classes(x)foriinrange(6):    print('%s=>%d(expected%d)'%(x[i],predictions[i],fx[i])) Output 0=>0(expected0) 1=>0(expected1) 2=>0(expected4) 3=>0(expected9) 4=>1(expected16) 5=>1(expected25) 123456 0=>0(expected0)1=>0(expected1)2=>0(expected4)3=>0(expected9)4=>1(expected16)5=>1(expected25) Notyetgettingamatchoftheexpectedandtheactualvalues Thankyou, AnthonyofSydney Reply JasonBrownlee October1,2019at2:17pm # Perhapsthemodelarchitecture(layersandnodes)needstuning? Perhapsthelearningrateneedstuning? Perhapsyouneedmoretrainingexamples? Perhapsyouneedmoreorfewerepochs? … Moreideashere: https://machinelearningmastery.com/start-here/#better Reply AnthonyTheKoala October2,2019at7:41am # DearDrJason, Icannotfindasystematicwaytofindawayforamachinelearningalgorithmtouseittocomputeadeterministicequationsuchasy=f(x)wheref(x)=x**2. Iamstillhavingtrouble.Iwillbepostingthisonthepage.Essentiallyis(i)adding/droppinglayers,(ii)adjustingthenumberofepochs,(iii)adjustingthebatch_size.ButIhaven’tcomecloseyet. Alsousingthefunctionmodel.predictratherthanmodel.predict_classes. Hereistheprogramwithmostofthecommentedoutlinesdeleted. fromkeras.modelsimportSequential fromkeras.layersimportDense importnumpyasnp #Aimistoseehowadeterministicfunctionwilloperateusingmachinelearning #Inyear7algebrawehavexandy.yisknownasf(x).Herey=f(x)=x**2 x=[iforiinrange(100)];#havealistofx=[0,1,2,3,4,5,.....,99] x=np.array(x) fx=[x**2forxinx];#havealistoffx=x**2=[0,1,4,9,16,25,...,9801] fx=np.array(fx) model=Sequential() model.add(Dense(55,input_dim=1,activation='linear')) model.add(Dense(34,activation='linear')) model.add(Dense(21,activation='linear')) model.add(Dense(13,activation='linear')) model.add(Dense(1,activation='linear')) model.compile(loss='mean_squared_error',optimizer='adam') model.fit(x,fx,epochs=89,batch_size=144,verbose=0) predictions=model.predict(x);#Thisseemstoworkinsteadofmodel.predict_classes print("x,predicted,expected") foriinrange(6): print('%s=>%d(expected%d)'%(x[i],predictions[i],fx[i])) 1234567891011121314151617181920212223242526272829 fromkeras.modelsimportSequentialfromkeras.layersimportDenseimportnumpyasnp #Aimistoseehowadeterministicfunctionwilloperateusingmachinelearning#Inyear7algebrawehavexandy.yisknownasf(x).Herey=f(x)=x**2 x=[i  foriinrange(100)];#havealistofx=[0,1,2,3,4,5,.....,99]x=np.array(x)   fx=[x**2  forxinx];#havealistoffx=x**2=[0,1,4,9,16,25,...,9801]fx=np.array(fx) model=Sequential()model.add(Dense(55,input_dim=1,activation='linear'))model.add(Dense(34,activation='linear'))model.add(Dense(21,activation='linear'))model.add(Dense(13,activation='linear'))model.add(Dense(1,activation='linear')) model.compile(loss='mean_squared_error',optimizer='adam') model.fit(x,fx,epochs=89,batch_size=144,verbose=0) predictions=model.predict(x);#Thisseemstoworkinsteadofmodel.predict_classes print("x,predicted,expected")foriinrange(6):    print('%s=>%d(expected%d)'%(x[i],predictions[i],fx[i])) Theoutputis: x,predicted,expected 0=>29(expected0) 1=>110(expected1) 2=>191(expected4) 3=>272(expected9) 4=>353(expected16) 5=>434(expected25) 1234567 
x,predicted,expected0=>29(expected0)1=>110(expected1)2=>191(expected4)3=>272(expected9)4=>353(expected16)5=>434(expected25) NomatterhowmuchIadjustthenumberofneuronsperlayer,thenumberoflayers,thenoofepochsandthebatchsize,the“predicted”appearslikeanarithmeticprogression,notageometricprogression. Notethetermstn+1–tnis81forallthepredictedvaluesinthemachinelearningmodel. BUTweknowthatthedifferencebetweensuccessivetermsiny=f(x)isnotthesame. Forexample,innonlinearrelationsuchasf(x)=x**2,f(x)=0,1,2,4,9,16,25,36,thedifferencebetweenthetermsis:1,1,2,5,7,9,11,thatistn+1–tn!=tn+2–tn+1. Sostillhavingtroubleworkingouthowtogetamachinelearningalgorithmevaluatef(x)withouttheformula. Reply JasonBrownlee October2,2019at8:15am # Hereisthesolution,hopeithelps #fitanmlponxvsx^2 fromsklearn.preprocessingimportMinMaxScaler fromkeras.modelsimportSequential fromkeras.layersimportDense fromnumpyimportasarray frommatplotlibimportpyplot #definedata x=asarray([iforiinrange(1000)]) y=asarray([a**2forainx]) #reshapeintorowsandcols x=x.reshape((len(x),1)) y=y.reshape((len(y),1)) #scaledata x_s=MinMaxScaler() x=x_s.fit_transform(x) y_s=MinMaxScaler() y=y_s.fit_transform(y) #fitamodel model=Sequential() model.add(Dense(10,input_dim=1,activation='relu')) model.add(Dense(1)) model.compile(loss='mse',optimizer='adam') model.fit(x,y,epochs=150,batch_size=10,verbose=0) mse=model.evaluate(x,y,verbose=0) print(mse) #predict yhat=model.predict(x) #plotrealvspredicted pyplot.plot(x,y,label='y') pyplot.plot(x,yhat,label='yhat') pyplot.legend() pyplot.show() 1234567891011121314151617181920212223242526272829303132 #fitanmlponxvsx^2fromsklearn.preprocessingimportMinMaxScalerfromkeras.modelsimportSequentialfromkeras.layersimportDensefromnumpyimportasarrayfrommatplotlibimportpyplot#definedatax=asarray([iforiinrange(1000)])y=asarray([a**2forainx])#reshapeintorowsandcolsx=x.reshape((len(x),1))y=y.reshape((len(y),1))#scaledatax_s=MinMaxScaler()x=x_s.fit_transform(x)y_s=MinMaxScaler()y=y_s.fit_transform(y)#fitamodelmodel=Sequential()model.add(Dense(10,input_dim=1,activation='relu'))model.add(Dense(1))model.compile(loss='mse',optimizer='adam')model.fit(x,y,epochs=150,batch_size=10,verbose=0)mse=model.evaluate(x,y,verbose=0)print(mse)#predictyhat=model.predict(x)#plotrealvspredictedpyplot.plot(x,y,label='y')pyplot.plot(x,yhat,label='yhat')pyplot.legend()pyplot.show() Iguessyoucouldalsodoaninverse_transform()onthepredictedvaluestogetbacktooriginalunits. Reply AnthonyTheKoala October2,2019at9:05am # DearDrJason, Thankyouverymuchforyourreply.Igotanmseintheorderof3x10**-6. Despitethis,Iwillbestudyingtheprogramandlearnmyselfabout(i)theMinMaxScalerandwhyweuseit,(ii)fit_transform(y)and(iii)onehiddenlayerof10neurons,and(iii)Iwillstillhavetolearnaboutthechoiceofactivationfunctionandlossfunctions.Thekeraswebsitehasasectiononlossfunctionsathttps://keras.io/losses/buthavingalookatthePython“IDLE”program,alookatfromkerasimportlosses,therearemanymorelossfunctionswhicharenecessarytocompileamodel. Inaddition,thepredictedvalueswillhavetobere-computedtoitsunscaledvalues.SoIwillalsolookup‘rescaling’. Thankyouagain, Anthony,SydneyNSW Reply JasonBrownlee October2,2019at10:10am # Yes,youcanuseinverse_transformtounscalethepredictions,asImentioned. 
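For example, a minimal sketch, assuming y_s is the MinMaxScaler that was fitted on y and yhat holds the scaled predictions from model.predict():

# y_s was fitted on y, so its inverse_transform maps predictions back to the original units
yhat_unscaled = y_s.inverse_transform(yhat)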
Reply AnthonyTheKoala October3,2019at6:26am # DearDrJason, Iknowhowtousetheinverse_transformfunction: FirstapplytheMinMaxScalertoscaleto0to1 x_s=MinMaxScaler() x=x_s.fit_transform(x) y_s=MinMaxScaler() y=y_s.fit_transform(y) 1234 x_s=MinMaxScaler()x=x_s.fit_transform(x)y_s=MinMaxScaler()y=y_s.fit_transform(y) Ifwewanttoreconstitutexandy,itissimpleto: x_original=x_s.inverse_transform(x);#wherexwastransformed/scaled y_original=y_s.inverse_transform(y);#whereywastransformed/scaled 12 x_original=x_s.inverse_transform(x);#wherexwastransformed/scaledy_original=y_s.inverse_transform(y);#whereywastransformed/scaled x_sandy_shastheminandmaxvaluesstoredoftheoriginalpre-transformeddata. BUThowdoyoutransformyhattoitsoriginalscalewhenitwasnotsubjecttotheinverse_transformfunction. IfIreliedonthey_s.inverse_transform(yhat),whereyougetthis: yhat_restored=y_s.inverse_transform(yhat);#usingthevaluesofyminandymaxoforiginaldata yhat_restored[0:10] array([[6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43]],dtype=float32) 123456789101112 yhat_restored=y_s.inverse_transform(yhat);#usingthevaluesofyminandymaxoforiginaldatayhat_restored[0:10]array([[6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43]],dtype=float32) Iwas‘hoping’forsomethingclosetotheoriginal: >>>y_restored=y_s.inverse_transform(y) >>>y_restored[0:10] array([[0.], [1.], [4.], [9.], [16.], [25.], [36.], [49.], [64.], [81.]]) 123456789101112 >>>y_restored=y_s.inverse_transform(y)>>>y_restored[0:10]array([[0.],      [1.],      [4.],      [9.],      [16.],      [25.],      [36.],      [49.],      [64.],      [81.]]) BUTyhatdoesnotusetheMinMaxScaleratthestart. DoIhavetorewritemyownfunction? Thanks, AnthonyofSydneyNSW Reply JasonBrownlee October3,2019at6:54am # Themodelpredictsscaledvalues,applytheinversetransformonyhatdirectly. Reply AnthonyTheKoala October3,2019at2:39pm # DearDrJason, Ididthatapplytheinversetransformofyhatdirectly,BUTGOTthese Cutdownversionofcode y_s=MinMaxScaler() y=y_s.fit_transform(y);#y_sstorestheminandmaxvaluesaccordingtothesklearndoc #Notetheaboveisfory.WEDON'TKNOWyhat(min)&yhat(max yhat=model.predict(x);#wehavethescaledestimate. x_original=x_s.inverse_transform(x);#thisprintedokay #Printoutofyhattransformed #Calculateyhatscaledusingtheminandmaxvaluesoff(x)=y yhat_restored=y_s.inverse_transform(yhat) #Printyhat print(yhat_restored[0:10]) array([[6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43], [6838.43]],dtype=float32) 123456789101112131415161718192021222324252627 y_s=MinMaxScaler()y=y_s.fit_transform(y);#y_sstorestheminandmaxvaluesaccordingtothesklearndoc                                      #Notetheaboveisfory.WEDON'TKNOWyhat(min)&yhat(max  yhat=model.predict(x);  #wehavethescaledestimate.  x_original=x_s.inverse_transform(x);#thisprintedokay #Printoutofyhattransformed#Calculateyhatscaledusingtheminandmaxvaluesoff(x)=y yhat_restored=y_s.inverse_transform(yhat) #Printyhatprint(yhat_restored[0:10])array([[6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43],      [6838.43]],dtype=float32) Don’tunderstandhowtogetaninversetransformofyhatwhenIdon’tknowthe‘untransformed’valuebecauseIhavenotestimatedit. Thankyou, AnthonyofSydney Reply JasonBrownlee October4,2019at5:39am # Youcaninversetransformyandyhatandplotboth. 
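Once y and yhat are both back in the original units, a regression metric such as RMSE (rather than accuracy) can be used to compare them. A minimal sketch, assuming y and yhat are the already inverse-transformed NumPy arrays:

from numpy import sqrt, mean
# root mean squared error between the true and predicted values, in original units
rmse = sqrt(mean((y - yhat) ** 2))
print('RMSE: %.3f' % rmse)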
Reply AnthonyTheKoala October4,2019at3:20am # DearDrJason, Itrieditagaintoillustratethatdespitethepredictedfittingaparabolaforscaledpredictedandexpectedvaluesoff(x)theresultingvalueswhen‘unscaled’backtotheoriginaldoesseemsquiteabsurd. Code–relevant #plotrealvspredicted pyplot.plot(x,y,label='y') pyplot.plot(x,yhat,label='yhat') pyplot.legend() print("Thegraphofthe(x,predictedf(x)and(x,f(x)isonaseparatewindow") pyplot.show() y_predicted=y_s.inverse_transform(yhat) y_expected=y_s.inverse_transform(y) x_original=x_s.inverse_transform(x) #print(y_predicted[0:10,].tolist(),x_original[0:10,].tolist()) print("Printingthefirst10,predicted,expected,andx") foriinrange(10): print(y_predicted[i],y_expected[i],x_original[i]) print("let'strysomeotherarbitrarysection,say10:20") #print(y_predicted[9:21,].tolist(),y_predicted[9:21,].tolist(),x_original[9:21,].tolist()) print("printing10thto20th,predicted,expected,andx") foriinrange(10): print(y_predicted[i+10],y_expected[i+10],x_original[i+10]) 123456789101112131415161718192021 #plotrealvspredictedpyplot.plot(x,y,label='y')pyplot.plot(x,yhat,label='yhat')pyplot.legend()print("Thegraphofthe(x,predictedf(x)and(x,f(x)isonaseparatewindow")pyplot.show()y_predicted=y_s.inverse_transform(yhat)y_expected=y_s.inverse_transform(y) x_original=x_s.inverse_transform(x)#print(y_predicted[0:10,].tolist(),x_original[0:10,].tolist())print("Printingthefirst10,predicted,expected,andx")foriinrange(10):    print(y_predicted[i],y_expected[i],x_original[i])  print("let'strysomeotherarbitrarysection,say10:20")#print(y_predicted[9:21,].tolist(),y_predicted[9:21,].tolist(),x_original[9:21,].tolist())print("printing10thto20th,predicted,expected,andx")foriinrange(10):    print(y_predicted[i+10],y_expected[i+10],x_original[i+10]) Theresultingoutput: 5.472537487406726e-06 Printingthefirst10,predicted,expected,andx [1030.0833][0.][0.] [1030.0833][1.][1.] [1030.0833][4.][2.] [1030.0833][9.][3.] [1030.0833][16.][4.] [1030.0833][25.][5.] [1030.0833][36.][6.] [1030.0833][49.][7.] [1030.0833][64.][8.] [1030.0833][81.][9.] let'strysomeotherarbitrarysection,say10:20 printing10thto20th,predicted,expected,andx [1030.0833][100.][10.] [1030.0833][121.][11.] [1030.0833][144.][12.] [1030.0833][169.][13.] [1030.0833][196.][14.] [1030.0833][225.][15.] [1030.0833][256.][16.] [1030.0833][289.][17.] [1030.0833][324.][18.] [1030.0833][361.][19.] 123456789101112131415161718192021222324 5.472537487406726e-06Printingthefirst10,predicted,expected,andx[1030.0833][0.][0.][1030.0833][1.][1.][1030.0833][4.][2.][1030.0833][9.][3.][1030.0833][16.][4.][1030.0833][25.][5.][1030.0833][36.][6.][1030.0833][49.][7.][1030.0833][64.][8.][1030.0833][81.][9.]let'strysomeotherarbitrarysection,say10:20printing10thto20th,predicted,expected,andx[1030.0833][100.][10.][1030.0833][121.][11.][1030.0833][144.][12.][1030.0833][169.][13.][1030.0833][196.][14.][1030.0833][225.][15.][1030.0833][256.][16.][1030.0833][289.][17.][1030.0833][324.][18.][1030.0833][361.][19.] WhenIplotted(x,yhat)and(x,f(x)),theplotwasasexpected.BUTwhenIrescaledtheyhatback,allthevaluesofunscaledyhatwere1030.0833whichisquiteodd. Why? Thankyou, AnthonyofSydneyNSW Reply AnthonyTheKoala October4,2019at3:31am # DearDrJason, Iprintedtheyhat,andtheywereallthesame. Thisisdespitethattheplotofthescaledvalues(x,yhat)lookedlikeaparabola Note:thisispriortoscaling. 
#plotrealvspredicted pyplot.plot(x,y,label='y') pyplot.plot(x,yhat,label='yhat') pyplot.legend() print("thegraphisprintedonanotherwindow") pyplot.show() print("Printingtheoutputofthescaledvaluesofyhat,f(x)andx") print("printingthefirst10") foriinrange(10): print(yhat[i],y[i],x[i]) print("printingthe10thto20th") foriinrange(10): print(yhat[i+10],y[i+10],x[i+10]) 1234567891011121314 #plotrealvspredictedpyplot.plot(x,y,label='y')pyplot.plot(x,yhat,label='yhat')pyplot.legend()print("thegraphisprintedonanotherwindow")pyplot.show() print("Printingtheoutputofthescaledvaluesofyhat,f(x)andx")print("printingthefirst10")foriinrange(10):    print(yhat[i],y[i],x[i])print("printingthe10thto20th")foriinrange(10):    print(yhat[i+10],y[i+10],x[i+10]) Yetdespitetheexpectedplotsofscaledvalues(x,yhat),and(x,y),yhat’svaluesarethesame Printingtheoutputofthescaledvalues printingthefirst10 [0.00117336][0.][0.] [0.00117336][1.002003e-06][0.001001] [0.00117336][4.00801202e-06][0.002002] [0.00117336][9.01802704e-06][0.003003] [0.00117336][1.60320481e-05][0.004004] [0.00117336][2.50500751e-05][0.00500501] [0.00117336][3.60721081e-05][0.00600601] [0.00117336][4.90981472e-05][0.00700701] [0.00117336][6.41281923e-05][0.00800801] [0.00117336][8.11622433e-05][0.00900901] printingthe10thto20th [0.00117336][0.0001002][0.01001001] [0.00117336][0.00012124][0.01101101] [0.00117336][0.00014429][0.01201201] [0.00117336][0.00016934][0.01301301] [0.00117336][0.00019639][0.01401401] [0.00117336][0.00022545][0.01501502] [0.00117336][0.00025651][0.01601602] [0.00117336][0.00028958][0.01701702] [0.00117336][0.00032465][0.01801802] [0.00117336][0.00036172][0.01901902] 1234567891011121314151617181920212223 Printingtheoutputofthescaledvaluesprintingthefirst10[0.00117336][0.][0.][0.00117336][1.002003e-06][0.001001][0.00117336][4.00801202e-06][0.002002][0.00117336][9.01802704e-06][0.003003][0.00117336][1.60320481e-05][0.004004][0.00117336][2.50500751e-05][0.00500501][0.00117336][3.60721081e-05][0.00600601][0.00117336][4.90981472e-05][0.00700701][0.00117336][6.41281923e-05][0.00800801][0.00117336][8.11622433e-05][0.00900901]printingthe10thto20th[0.00117336][0.0001002][0.01001001][0.00117336][0.00012124][0.01101101][0.00117336][0.00014429][0.01201201][0.00117336][0.00016934][0.01301301][0.00117336][0.00019639][0.01401401][0.00117336][0.00022545][0.01501502][0.00117336][0.00025651][0.01601602][0.00117336][0.00028958][0.01701702][0.00117336][0.00032465][0.01801802][0.00117336][0.00036172][0.01901902] Idon’tgetit.Youwouldexpectasimilarityofyhatandf(x). Iwouldappreciatearesponse Thankyou, AnthonyofSydney Reply JasonBrownlee October4,2019at5:49am # Sorry,Idon’thavethecapacitytodebugyourexamplesfurther.Ihopethatyoucanunderstand. Reply AnthonyTheKoala October4,2019at6:36am # DearDrJason, Iaskedthequestionathttps://datascience.stackexchange.com/questions/61223/reconstituting-estimated-predicted-values-to-original-scale-from-minmaxscalerandhopethatthereisananswer. 
Thanks AnthonyOfSydney Reply JasonBrownlee October4,2019at8:35am # Hereisthesolution #fitanmlponxvsx^2 fromsklearn.preprocessingimportMinMaxScaler fromkeras.modelsimportSequential fromkeras.layersimportDense fromnumpyimportasarray frommatplotlibimportpyplot #definedata x=asarray([iforiinrange(1000)]) y=asarray([a**2forainx]) #reshapeintorowsandcols x=x.reshape((len(x),1)) y=y.reshape((len(y),1)) #scaledata x_s=MinMaxScaler() x=x_s.fit_transform(x) y_s=MinMaxScaler() y=y_s.fit_transform(y) #fitamodel model=Sequential() model.add(Dense(10,input_dim=1,activation='relu')) model.add(Dense(1)) model.compile(loss='mse',optimizer='adam') model.fit(x,y,epochs=150,batch_size=10,verbose=0) mse=model.evaluate(x,y,verbose=0) print(mse) #predict yhat=model.predict(x) #inversetransforms x=x_s.inverse_transform(x) y=y_s.inverse_transform(y) yhat=y_s.inverse_transform(yhat) #plotrealvspredicted pyplot.plot(x,y,label='y') pyplot.plot(x,yhat,label='yhat') pyplot.legend() pyplot.show() 123456789101112131415161718192021222324252627282930313233343536 #fitanmlponxvsx^2fromsklearn.preprocessingimportMinMaxScalerfromkeras.modelsimportSequentialfromkeras.layersimportDensefromnumpyimportasarrayfrommatplotlibimportpyplot#definedatax=asarray([iforiinrange(1000)])y=asarray([a**2forainx])#reshapeintorowsandcolsx=x.reshape((len(x),1))y=y.reshape((len(y),1))#scaledatax_s=MinMaxScaler()x=x_s.fit_transform(x)y_s=MinMaxScaler()y=y_s.fit_transform(y)#fitamodelmodel=Sequential()model.add(Dense(10,input_dim=1,activation='relu'))model.add(Dense(1))model.compile(loss='mse',optimizer='adam')model.fit(x,y,epochs=150,batch_size=10,verbose=0)mse=model.evaluate(x,y,verbose=0)print(mse)#predictyhat=model.predict(x)#inversetransformsx=x_s.inverse_transform(x)y=y_s.inverse_transform(y)yhat=y_s.inverse_transform(yhat)#plotrealvspredictedpyplot.plot(x,y,label='y')pyplot.plot(x,yhat,label='yhat')pyplot.legend()pyplot.show() Thethreemissinglineswere: #inversetransforms x=x_s.inverse_transform(x) y=y_s.inverse_transform(y) yhat=y_s.inverse_transform(yhat) 1234 #inversetransformsx=x_s.inverse_transform(x)y=y_s.inverse_transform(y)yhat=y_s.inverse_transform(yhat) Reply AnthonyTheKoala October4,2019at9:59am # DearDrJason, IamcomingtotheconclusionthattheremustbeabugNOTinyoursolutionandneitherinmysolution.Ithinkitiscomingfromabuginthelowerimplementationofthelanguage. Iprintedthescaledversionofyhat,f(x)actualandxandgotthis. NOTEthevaluesarethesameforthescaledversionofyhat. Thatis: model.fit(x,y,epochs=277,batch_size=200,verbose=0) mse=model.evaluate(x,y,verbose=0) print("thevalueofthemse") print(mse) #predict yhat=model.predict(x) 123456 model.fit(x,y,epochs=277,batch_size=200,verbose=0)mse=model.evaluate(x,y,verbose=0)print("thevalueofthemse")print(mse)#predictyhat=model.predict(x) DESPITEthesuccessfulplotof(x,yhat)and(x,f(x), theresultingoutputofthefirst10ofthescaledoutputofyhatisthesame, ThatiswewouldgetaFLATLINEifweplotted(x,yhat),BUTTHEPLOTWASAPARABOLA. [0.00161531][0.][0.] 
[0.00161531][1.002003e-06][0.001001] [0.00161531][4.00801202e-06][0.002002] [0.00161531][9.01802704e-06][0.003003] [0.00161531][1.60320481e-05][0.004004] [0.00161531][2.50500751e-05][0.00500501] [0.00161531][3.60721081e-05][0.00600601] [0.00161531][4.90981472e-05][0.00700701] [0.00161531][6.41281923e-05][0.00800801] [0.00161531][8.11622433e-05][0.00900901] 12345678910 [0.00161531][0.][0.][0.00161531][1.002003e-06][0.001001][0.00161531][4.00801202e-06][0.002002][0.00161531][9.01802704e-06][0.003003][0.00161531][1.60320481e-05][0.004004][0.00161531][2.50500751e-05][0.00500501][0.00161531][3.60721081e-05][0.00600601][0.00161531][4.90981472e-05][0.00700701][0.00161531][6.41281923e-05][0.00800801][0.00161531][8.11622433e-05][0.00900901] Whenwedidthefollowingtransforms: x=x_s.inverse_transform(x) y=y_s.inverse_transform(y) yhat=y_s.inverse_transform(yhat) 123 x=x_s.inverse_transform(x)y=y_s.inverse_transform(y)yhat=y_s.inverse_transform(yhat) WESTILLGOTTHESAMEFAULTFORTHEUNSCALEDVALUESofyhat.The2ndcolumnisf(x)andthirdcolumnisx. [1612.0857][0.][0.] [1612.0857][1.][1.] [1612.0857][4.][2.] [1612.0857][9.][3.] [1612.0857][16.][4.] [1612.0857][25.][5.] [1612.0857][36.][6.] [1612.0857][49.][7.] [1612.0857][64.][8.] [1612.0857][81.][9.] 12345678910 [1612.0857][0.][0.][1612.0857][1.][1.][1612.0857][4.][2.][1612.0857][9.][3.][1612.0857][16.][4.][1612.0857][25.][5.][1612.0857][36.][6.][1612.0857][49.][7.][1612.0857][64.][8.][1612.0857][81.][9.] Conclusion:Itisnotaprogrammaticalbugineitheryoursolutionormysolution.Ibelieveitmaybealowerimplementationproblem. WhyamI‘persistent’inthismatter:becauseincaseIhavemorecomplexmodelsIwanttoseethepredicted/yhatvaluesthatarere-scaled. Idon’tknowiftherearepeopleatstackexchangewhomayhaveaninsight. Iappreciateyourtime,manyblessingstoyou, AnthonyofSydney Reply JasonBrownlee October6,2019at8:05am # Ibelieveiscorrect,giventhatitisanexponential,themodelhasdecidedthatitcangiveupcorrectnessatthelowendforcorrectnessatthehighend–giventhereductioninMSE. Considerchangingthenumberofexamplesfrom1Kto100,thenreviewall100valuesmanually–you’llseewhatImean. Allofthisisagoodexercise,welldone. Reply AnthonyTheKoala October13,2019at10:57pm # DearDrJason, Ididthisproblemagainandgotverygoodresults! IcannotexplainwhyIgotaccurateresults,whenIexpectedtogetaccurateresults,BUTtheyarecertainlyanimprovement. TherescaledoriginalandfittedvaluesproducedanRMSof0.0. Hereisthecodewithvariablenameschangedslightly. fromsklearn.preprocessingimportMinMaxScaler fromkeras.modelsimportSequential fromkeras.layersimportDense fromnumpyimportsqrt frommatplotlibimportpyplot x=asarray([iforiinrange(100)]) y=asarray([i**2foriinx]) x=x.reshape((len(x),1)) y=y.reshape((len(y),1)) x_s=MinMaxScaler() xscaled=x_s.fit_transform(x) y_s=MinMaxScaler() yscaled=y_s.fit_transform(y) model=Sequential() model.add(Dense(100,input_dim=1,activation='relu') model.add(Dense(1)) model.compile(loss='mse',optimizer='adam') model.fit(xscaled,yscaled,epochs=150,batch_size=10,verbose=0) mse=model.evaluate(xscaled,yscaled,verbose=0) mse 2.9744908551947447e-05 yhat=model.predict(x) yhat_original=y_s.inverse_transform(yscaled) #Firstfiveelementsofpredictedvalues yhat_original[:5].T array([[0.,1.,4.,9.,16.]]) #Firstfiveelementsoforiginaly y[:5].T array([[0,1,4,9,16]]) #Lastfiveelementsoftheoriginalseries. 
y[-5:].T
array([[9025, 9216, 9409, 9604, 9801]])
# Last five elements of predicted values
yhat_original[-5:].T
array([[9025., 9216., 9409., 9604., 9801.]])
# Now determining the RMS of the predicted and original values of y
cum_sum = 0
for i in range(len(yhat_original)):
    cum_sum += (yoriginal[i] - yhat_original[i])**2 / len(yhat_original)
rms = sqrt(cum_sum)
rms[0]
0.0
# Plotting the rescaled original and rescaled yhat
pyplot.plot(xoriginal, yoriginal, label='y')
pyplot.plot(xoriginal, yhat_original, label='fitted')
pyplot.legend()
pyplot.show()

It works, the rescaled yhat is as expected, but I cannot explain why it was "cuckoo" in the previous attempt. More experimentation on this.
Nevertheless, my next project is k-folds sampling on a deterministic function to see if the gaps in the resampled data fold will give us an accurate prediction despite the random sampling in each fold.
Thank you,
Anthony of Sydney
Reply
Anthony The Koala October 13, 2019 at 11:44 pm #
Dear Dr Jason,
Apologies, I thought the RMS was 'unrealistic'. I had a programming error.
Nevertheless, I did it again, and still produced results which looked pleasing.
x=x.reshape((len(x),1)) y=y.reshape((len(y),1)) x_s=MinMaxScaler() y_s=MinMaxScaler() x_scaled=x_s.fit_transform(x) y_scaled=y_s.fit_transform(y) model=Sequential() model.add(Dense(100,input_dim=1,activation='relu')) model.add(Dense(1)) model.compile(loss='mse',optimizer='adam') model.fit(x_scaled,y_scaled,epochs=100,batch_size=10,verbose=0) mse=model.evaluate(x_scaled,y_scaled,verbose=0) mse 1.0475558547113905e-05 yhat=model.predict(x_scaled) yhat_original=y_s.inverse_transform(yhat) #Firstfiveofyhat_original(yhatrescaled) yhat_original[:5].T array([[11.835742,11.835742,11.835742,11.835742,11.835742]] #comparedtofirstoriginal5elementsofy=0,1,4,9,16 #Lastfiveofyhat_original(yhatrescaled) yhat_original[-5:].T array([[8985.839,9154.454,9323.067,9491.684,9660.3] #comparedtolastoriginal5elementsofy=9025,9216,9409,9604,9801 #NowdeterminetheRMSofthepredictedandoriginalvalues cum_sum=0 foriinrange(len(yhat_original)): cum_sum+=(y[i]-yhat_original[i])**2/len(yhat_original) mse=sqrt(cum_sum) mse array([31.72189417]) pyplot.plot(x,y,label='y') pyplot.plot(x,yhat_original,label='estimated') pyplot.legend() pyplot.show() 1234567891011121314151617181920212223242526272829303132333435363738394041 x=x.reshape((len(x),1))y=y.reshape((len(y),1))x_s=MinMaxScaler()y_s=MinMaxScaler()x_scaled=x_s.fit_transform(x)y_scaled=y_s.fit_transform(y)model=Sequential()model.add(Dense(100,input_dim=1,activation='relu'))model.add(Dense(1)) model.compile(loss='mse',optimizer='adam')model.fit(x_scaled,y_scaled,epochs=100,batch_size=10,verbose=0) mse=model.evaluate(x_scaled,y_scaled,verbose=0)mse1.0475558547113905e-05 yhat=model.predict(x_scaled)yhat_original=y_s.inverse_transform(yhat) #Firstfiveofyhat_original(yhatrescaled)yhat_original[:5].Tarray([[11.835742,11.835742,11.835742,11.835742,11.835742]]#comparedtofirstoriginal5elementsofy=0,1,4,9,16 #Lastfiveofyhat_original(yhatrescaled)yhat_original[-5:].Tarray([[8985.839,9154.454,9323.067,9491.684,9660.3  ]#comparedtolastoriginal5elementsofy=9025,9216,9409,9604,9801 #NowdeterminetheRMSofthepredictedandoriginalvaluescum_sum=0foriinrange(len(yhat_original)): cum_sum+=(y[i]-yhat_original[i])**2/len(yhat_original)mse=sqrt(cum_sum)msearray([31.72189417])pyplot.plot(x,y,label='y')pyplot.plot(x,yhat_original,label='estimated')pyplot.legend()pyplot.show() Insum,therescaledyhatproducedresultsclosertotheoriginalvalues.Thelowervaluesofyhatrescaledappeartobeodd. Despitethatthevaluesneedtobemorerealisticatthebottomendeventhoughtheplotoftherescaledx&rescaledy,andrescaledxandrescaledyhatlookclose. Moreinvestigationsneededonthebatchsize,epochsandoptimizers. Next,todok-foldssamplingonadeterministicfunctiontoseeifthegapsintheresampleddatafoldwillgiveusanaccuratepredictiondespitetherandomsamplingineachfold. Againapologiesforthemistakeinthepreviouspost. AnthonyofSydney Reply JasonBrownlee October14,2019at8:08am # Welldone. Reply AnthonyTheKoala November21,2019at5:03am # DearDrJason, Aperson‘Serali’aparticlephysicistreliedtomeat“StackExchange”repliedandsuggestedthatIshuffletheoriginaldata.Theshufflingofdatainthiscontexthasnothingtodowiththeshufflingink-folds.Accordingtothecontributor,theresultsshouldimprove.Sourcehttps://datascience.stackexchange.com/questions/61223/reconstituting-estimated-predicted-values-to-original-scale-from-minmaxscaler ThecodeisexactlythesameaswhatIwasexperimentingwith.SoIwillshowthenecessarycodetoshuffeatthestartandde-shuffleattheend. 
Shufflingcodeatthebeginning: fromsklearn.preprocessingimportMinMaxScaler fromkeras.modelsimportSequential fromkeras.layersimportDense fromnumpyimportasarray fromnumpyimportsqrt frommatplotlibimportpyplot fromnumpy.randomimportseed fromnumpy.randomimportshuffle fromnumpy.randomimportsample importnumpyasnp #Wewillwanttoreshufflethedata x=[iforiinrange(100)] y=[i**2foriinx] xfx=np.vstack((x,y)).T xy=xfx shuffle(xy) #x=asarray([iforiinrange(100)]) #y=asarray([i**2foriinx]) #x=asarray(xy[:,0]);#x.reshape((len(x),1)) x=np.reshape(xy[:,0],(100,1)) #print('debug,sizex=%d'+str(np.shape(x))) y=np.reshape(xy[:,1],(100,1)) #y=asarray(xy[:,1]);#y.reshape((len(y),1)) #print('debug,sizey=%d'+str(np.shape(y))) x_s=MinMaxScaler() y_s=MinMaxScaler() x_scaled=x_s.fit_transform(x) y_scaled=y_s.fit_transform(y) #Therestisfedintomodel ...... ....... yhat=model.predict(x_scaled) yhat_original=y_s.inverse_transform(yhat) 123456789101112131415161718192021222324252627282930313233343536373839 fromsklearn.preprocessingimportMinMaxScalerfromkeras.modelsimportSequentialfromkeras.layersimportDensefromnumpyimportasarrayfromnumpyimportsqrtfrommatplotlibimportpyplot fromnumpy.randomimportseedfromnumpy.randomimportshufflefromnumpy.randomimportsample importnumpyasnp #Wewillwanttoreshufflethedatax=[iforiinrange(100)]y=[i**2foriinx] xfx=np.vstack((x,y)).Txy=xfxshuffle(xy) #x=asarray([iforiinrange(100)])#y=asarray([i**2foriinx]) #x=asarray(xy[:,0]);#x.reshape((len(x),1))x=np.reshape(xy[:,0],(100,1))#print('debug,sizex=%d'+str(np.shape(x)))y=np.reshape(xy[:,1],(100,1))#y=asarray(xy[:,1]);#y.reshape((len(y),1))#print('debug,sizey=%d'+str(np.shape(y)))x_s=MinMaxScaler()y_s=MinMaxScaler()x_scaled=x_s.fit_transform(x)y_scaled=y_s.fit_transform(y)#Therestisfedintomodel.............yhat=model.predict(x_scaled)yhat_original=y_s.inverse_transform(yhat) Theendcodewas‘unshuffled’/sortedinordertodisplaythedifferencebetweentheactualandpredicted. #Plottingdotsinsteadoflineplototherwisewegetazig-zagplot pyplot.plot(x,y,'r.',label='y') pyplot.plot(x,yhat_original,'b.',label='estimated') #printingthefirstvalues-wehavetosortthevaluesinordertoseethemin #theirpropercontext. xy=np.vstack((x[:,0],y[:,0])).T xyhat=np.vstack((x[:,0],yhat_original[:,0])).T xyy=np.sort(xy,axis=0) xyhatt=np.sort(xyhat,axis=0) print("printingx,y,yhat") forloopinrange(10): print(xyy[loop,0],xyy[loop,1],xyhatt[loop,1]) pyplot.legend() pyplot.show() 12345678910111213141516171819 #Plottingdotsinsteadoflineplototherwisewegetazig-zagplotpyplot.plot(x,y,'r.',label='y')pyplot.plot(x,yhat_original,'b.',label='estimated') #printingthefirstvalues-wehavetosortthevaluesinordertoseethemin#theirpropercontext.xy=np.vstack((x[:,0],y[:,0])).Txyhat=np.vstack((x[:,0],yhat_original[:,0])).T xyy=np.sort(xy,axis=0)xyhatt=np.sort(xyhat,axis=0) print("printingx,y,yhat")forloopinrange(10):        print(xyy[loop,0],xyy[loop,1],xyhatt[loop,1])  pyplot.legend()pyplot.show() Hereisalistingofx,f(x)andyhat printingx,y,yhat 001.4915295839309692 112.66086745262146 244.75526237487793 399.125076293945312 41615.723174095153809 52524.287418365478516 63635.04938507080078 74947.73912811279297 86462.95930480957031 98180.16889190673828 1234567891011 printingx,y,yhat001.4915295839309692112.66086745262146244.75526237487793399.12507629394531241615.72317409515380952524.28741836547851663635.0493850708007874947.7391281127929786462.9593048095703198180.16889190673828 Thingstoimprove: *adjustingthenumberoflayers. 
*adjustinghowmanyneuronsineachlayer *adjustingthebatchsize *adjustingtheepochsize Inaddition *lookatk-foldsforfurthermodelrefinement. Thankyou AnthonyofSydney AnthonyTheKoala November24,2019at4:12pm # DearDrJason, Hereisanevenimprovedversionwithverycloseresults. InsteadofMinMaxScaler,Itookthelogs(tothebasee)oftheinputsxandf(x)appliedmymodel,thenretransformedmymodeltoitsoriginalvalues. Snippetsofcodetransformingthedata #Wewillwanttoreshufflethedata x=[iforiinrange(100)] y=[i**2foriinx] xfx=np.vstack((x,y)).T xy=xfx seed(1) shuffle(xy) x=np.reshape(xy[:,0],(100,1)) print('debug,sizex=%d'+str(np.shape(x)))#shapeis(100,1) y=np.reshape(xy[:,1],(100,1)) print('debug,sizey=%d'+str(np.shape(y))) #x_s=MinMaxScaler() #x_s=MinMaxScaler(feature_range=(0,200)) #y_s=MinMaxScaler() #x_scaled=x_s.fit_transform(x) #y_scaled=y_s.fit_transform(y) x_scaled=np.log(x+1);#weadd1soasnottohaveanerroraslog(0)producesanerror y_scaled=np.log(y+1);#weadd1soasnottohaveanerroraslog(0)producesanerror model=Sequential() ...#themodelisappliedonthetransformeddata 123456789101112131415161718192021222324 #Wewillwanttoreshufflethedatax=[iforiinrange(100)]y=[i**2foriinx] xfx=np.vstack((x,y)).Txy=xfxseed(1)shuffle(xy)x=np.reshape(xy[:,0],(100,1))print('debug,sizex=%d'+str(np.shape(x)))#shapeis(100,1)y=np.reshape(xy[:,1],(100,1)) print('debug,sizey=%d'+str(np.shape(y)))#x_s=MinMaxScaler()#x_s=MinMaxScaler(feature_range=(0,200))#y_s=MinMaxScaler()#x_scaled=x_s.fit_transform(x)#y_scaled=y_s.fit_transform(y)x_scaled=np.log(x+1);#weadd1soasnottohaveanerroraslog(0)producesanerrory_scaled=np.log(y+1);#weadd1soasnottohaveanerroraslog(0)producesanerror model=Sequential()...#themodelisappliedonthetransformeddata The #Weneedtoresortthenumbers #inordertoprintthefirst10values xy=np.vstack((x[:,0],y[:,0])).T xyhat=np.vstack((x[:,0],yhat_original[:,0])).T xyy=np.sort(xy,axis=0) xyhatt=np.sort(xyhat,axis=0) print("printingx,y,yhat") forloopinrange(10): print(xyy[loop,0],xyy[loop,1],xyhatt[loop,1]) #wanttopredictforthevalues100and200 Xnew=np.reshape([100,200],(2,1)) print("let'spredictforvalues100and200") print("thevaluesofx=Xnewbeforetransform%s,%s"%(Xnew[0],Xnew[1])) Xnew=np.log(Xnew+1) print("valuesofscaledxnewtoputintothemodel%s,%s"%(Xnew[0],Xnew[1])) ynew=model.predict(Xnew) #Re-transformtheoriginalvalues ynew=np.exp(ynew)-1 print("ThevaluesofXnewanditspredictedyhat") forloopinrange(len(Xnew)): print("Xnew[%s]=%s,ynew[%s]=%s"%(loop,Xnew[loop],loop,ynew[loop])) 1234567891011121314151617181920212223242526 #Weneedtoresortthenumbers#inordertoprintthefirst10valuesxy=np.vstack((x[:,0],y[:,0])).Txyhat=np.vstack((x[:,0],yhat_original[:,0])).T xyy=np.sort(xy,axis=0)xyhatt=np.sort(xyhat,axis=0)print("printingx,y,yhat")forloopinrange(10):        print(xyy[loop,0],xyy[loop,1],xyhatt[loop,1]) #wanttopredictforthevalues100and200Xnew=np.reshape([100,200],(2,1)) print("let'spredictforvalues100and200")print("thevaluesofx=Xnewbeforetransform%s,%s"%(Xnew[0],Xnew[1])) Xnew=np.log(Xnew+1)print("valuesofscaledxnewtoputintothemodel%s,%s"%(Xnew[0],Xnew[1]))ynew=model.predict(Xnew) #Re-transform  theoriginalvaluesynew=np.exp(ynew)-1print("ThevaluesofXnewanditspredictedyhat")forloopinrange(len(Xnew)):        print("Xnew[%s]=%s,ynew[%s]=%s"%(loop,Xnew[loop],loop,ynew[loop])) Theresultingoutput:Notehowclosetheactualf(x)istothepredictedf(x) printingx,y,yhat 000.00208890438079834 110.9818048477172852 244.111057281494141 399.025933265686035 41615.918327331542969 52524.944564819335938 63636.00426483154297 74949.05435562133789 86463.969764709472656 98180.93276977539062 
let'spredictforvalues100and200 thevaluesofx=Xnewbeforetransform[100],[200] valuesofscaledxnewtoputintothemodel[4.61512052],[5.30330491] ThevaluesofXnewanditspredictedyhat Xnew[0]=[100.],ynew[0]=[10008.037] Xnew[1]=[200.],ynew[1]=[40082.062] 12345678910111213141516171819 printingx,y,yhat000.00208890438079834110.9818048477172852244.111057281494141399.02593326568603541615.91832733154296952524.94456481933593863636.0042648315429774949.0543556213378986463.96976470947265698180.93276977539062 let'spredictforvalues100and200thevaluesofx=Xnewbeforetransform[100],[200]valuesofscaledxnewtoputintothemodel[4.61512052],[5.30330491] ThevaluesofXnewanditspredictedyhatXnew[0]=[100.],ynew[0]=[10008.037]Xnew[1]=[200.],ynew[1]=[40082.062] JasonBrownlee November25,2019at6:21am # Nicework. kamu October6,2019at7:51pm # HiJason, Thankyouverymuchfor“YourFirstDeepLearningProjectinPythonwithKerasStep-By-Step”tutorial.Itisveryusefulforme.Iwanttoaskyou: CanIcode: model.add(Dense(8))#inputlayer model.add(Dense(12,activation=’relu’))#firsthiddenlayer Insteadof: model.add(Dense(12,input_dim=8,activation=’relu’))#inputlayerandfirsthiddenlayer Sincerely. Reply JasonBrownlee October7,2019at8:29am # No. Theinput_dimargumentdefinestheinputlayer. Reply keryums October17,2019at1:22am # HiJason,isitnotnecessarytousethekerasutilility‘to_categorical’toconvertyouryvectorintoamatrixbeforefittingthemodel? Reply JasonBrownlee October17,2019at6:37am # Youcan,oryoucanusethesklearntoolstodothesamething. Reply AquillaSetiawanKanadi October17,2019at6:35am # HiJason, Thanksalotforyourtutorialaboutdeeplearningproject,itreallyhelpmealotinmyjourneytolearnmachinelearning. Ihaveaquestionaboutthedatasplittingincodeabove,howisthesplittingworkbetweendatafortrainingandthedataforvalidatethetrainingdata?I’vetriedtoreadyourtutorialaboutthedatasplittingbutihavenoideasaboutthedatasplittingworkabove. Thankyou, Aquilla Reply JasonBrownlee October17,2019at6:47am # Wedidnotsplitthedata,wefitandevaluatedononeset.Wedidthisforbrevity. Reply Loveyourwork! October17,2019at11:43am # HiJason, Ijustwantedtothankyou.Thistutorialisincrediblyclearandwellpresented.Unlikemanyotheronlinetutorialsyouexplainveryeloquentlytheintuitionbehindthelinesofcodeandwhatisbeingaccomplishedwhichisveryuseful.AssomeonejuststartingoutwithKerasIhadbeenfindingsomeofthecoding,aswellashowKerasandTensorflowinteract,confusing.AfteryourexplanationsKerasseemsincrediblybasic.I’vebeenlookingoversomeofmyrecentcodefromotherKerastutorialsandInowunderstandhoweverythingworks. Thanksagain! Reply JasonBrownlee October17,2019at1:50pm # Welldoneonyourprogressandthanksforyoursupport! Reply Ahmed October19,2019at6:16am # DearJason.Iamdeeplygratefultothisamazingwork.Everythingworkswellsofar.KingRegards Reply JasonBrownlee October19,2019at6:55am # Thanks,welldoneonyourprogress! Reply JAMESJONAH October28,2019at10:56am # Pleaseineedhelp,whichalgorithmsisthebestincyberthreatdetectionandhowtoimplementit.thanks Reply JasonBrownlee October28,2019at1:18pm # ThisisacommonquestionthatIanswerhere: https://machinelearningmastery.com/faq/single-faq/what-algorithm-config-should-i-use Reply shivan October29,2019at7:01am # hellosir doyouhaveanimplementationabout(medicalimageanalysiswithdeeplearning). ineedtostartwithmedicalimageNOTrealworldimage thanksforyourhelp. Reply JasonBrownlee October29,2019at1:47pm # Notreally,sorry. Reply shivan October31,2019at9:18am # so,whatdoyourecommendmeaboutit thanks. Reply JasonBrownlee October31,2019at1:36pm # Perhapsstartbycollectingadataset. 
Thenconsiderreviewingtheliteraturetoseewhattypesofdataprepandmodelsotherhaveusedforsimilardata. Reply NasirShah October30,2019at7:27am # Sir.iamnewtoneuralnetwork.sofromwhereistartit.orwhichtutorialiwatch.ididn’thaveanyideaaboutit. Reply JasonBrownlee October30,2019at1:55pm # Yes,youcanstarthere: https://machinelearningmastery.com/start-here/#deeplearning Reply himahansi November3,2019at1:35pm # hellosir,I’mnewtothisfield.I’mgoingtodevelopmonophonicmusicalinstrumentclassificationsystemusingpythonandKeras.sir,Iwanttofindmonophonicdataset,howcanIfindit. Itrytogetpianomusicfromyoutubeandconvertitto.wawfileandsplittingit.Isitagoodorbad?oranothermethodsavailabletogetfreedatasetontheweb..giveyoursuggestionsplease?? Reply JasonBrownlee November4,2019at6:37am # Perhapsthiswillhelp: https://machinelearningmastery.com/faq/single-faq/where-can-i-get-a-dataset-on-___ Reply MonaAhmed November20,2019at3:14am # igotscore76.69 Reply JasonBrownlee November20,2019at6:20am # Welldone! Reply NiallXie November26,2019at8:26am # Hello,IjustwanttosaythatIamelatedtouseyourtutorial.So,IamworkingonagroupprojectwithmyteamandIuseddatasetsrepresentingheartdisease,diabetesandbreastcancerforthistutorial.However,thiscodeexamplewillgiveanerrorwhenthecellcontainsastringvalue,inthiscase…titlenameslikeclump_thickessand?willproduceanerror.howdoIfixthis? Reply JasonBrownlee November26,2019at1:28pm # Thanks. Perhapstryencodingyourcategoriesusingaonehotencodingfirst: https://machinelearningmastery.com/how-to-prepare-categorical-data-for-deep-learning-in-python/ Reply Mohamed November28,2019at10:46pm # thankyousirforthisarticle,wouldyoupleasesuggestanexamplewithtestingdata? Reply JasonBrownlee November29,2019at6:49am # SorryIdon’tunderstandyourquestion,canyouelaborate? Reply Chris December3,2019at10:49pm # Ibelievethereissomethingwrongwiththe(150/10)15updatestothemodelweights.Theinternalcoefficientsareupdatedaftereverysinglebatch.Ourdataiscomprisedof768samples.Sincebatch_size=10,weobtain77batches(76with10samplesandonewith8).Therefore,ateachepochweshouldsee77updatesofweightsandcoefficientsandnot15.Moreover,thetotalnumberofupdatesmustbe:150*77=11550.AmImissingsomethingimportant? Reallygoodjobandverywell-writtenarticle(allyourarticles).Keepupthegoodjob.Cheers Reply JasonBrownlee December4,2019at5:37am # You’reright.NotsurewhatIwasthinkingthere.Simplified. Reply Justine December14,2019at9:58am # Thanks!Thisismyfirstforayintokeras,andthetutorialwentswimmingly.Amnowtrainingonmyowndata.Itisnotperformingworsethanonmyothermachinelearningmodels(that’sawin:). Reply JasonBrownlee December15,2019at6:02am # Welldone! Reply x December17,2019at8:37am # Hi,Jason.Thankssomuchforyouranswer.NowmyquestioniswhyIcan’tfoundmydirectoryinJupyterandputthe‘pima-indians-diabetes.csv’init. 
OSErrorTraceback(mostrecentcalllast) in 4fromkeras.layersimportDense 5#loadthedataset —->6dataset=loadtxt(‘pima-indians-diabetes.csv’,delimiter=’,’) 7#splitintoinput(X)andoutput(y)variables 8X=dataset[:,0:8] D:\anaconda\lib\site-packages\numpy\lib\npyio.pyinloadtxt(fname,dtype,comments,delimiter,converters,skiprows,usecols,unpack,ndmin,encoding,max_rows) 966fname=os_fspath(fname) 967if_is_string_like(fname): –>968fh=np.lib._datasource.open(fname,‘rt’,encoding=encoding) 969fencoding=getattr(fh,‘encoding’,‘latin1’) 970fh=iter(fh) D:\anaconda\lib\site-packages\numpy\lib\_datasource.pyinopen(path,mode,destpath,encoding,newline) 267 268ds=DataSource(destpath) –>269returnds.open(path,mode,encoding=encoding,newline=newline) 270 271 D:\anaconda\lib\site-packages\numpy\lib\_datasource.pyinopen(self,path,mode,encoding,newline) 621encoding=encoding,newline=newline) 622else: –>623raiseIOError(“%snotfound.”%path) 624 625 OSError:pima-indians-diabetes.csvnotfound. Reply JasonBrownlee December17,2019at1:36pm # Perhapstryrunningthecodefilefromthecommandline,asfollows: https://machinelearningmastery.com/faq/single-faq/how-do-i-run-a-script-from-the-command-line Reply ManoharNookala December22,2019at9:32pm # Hisir, Mynameismanohar.itrainedadeeplearningmodeloncarpriceprediction.igot loss:nan–acc:0.0000e+00.ifyougivemeyouremailIDtheniwillsendyou.youcantellmetheproblem.pleasedothishelpbecauseiamabeginner. Reply JasonBrownlee December23,2019at6:48am # Perhapsyouneedtoscalethedatapriortofitting? Perhapsyouneedtousereluactivation? Perhapsyouneedsometypeofregularization? Perhapsyouneedalargerorsmallermodel? Reply ShoneXu January5,2020at1:24am # HiJason, thanksanditisagreattutorial.just1question.dowehavetotrainthemodelby“model.fit(x,y,epochs=150,batch_size=10)”everytimebeforemakingthepredictionbecauseittakesaverylongtimetotrainthemodel.Iamjustwonderingwhetheritispossibletosavethetrainedmodelandgostraighttothepredictionskippingthemodel.fit(eg:pickle)? manythanksforyouradviceinadvance cheers Reply JasonBrownlee January5,2020at7:06am # No,youcanfitthemodelonce,thensaveit: https://machinelearningmastery.com/save-load-keras-deep-learning-models/ Thenlaterloaditandmakepredictions. Reply ShoneXu January7,2020at2:16pm # Thanksandwillcheckitout Reply ustengg January8,2020at7:40pm # ThankyousomuchforthistutorialsirbutHowcanIusethemodeltopredictusingdataoutsidethedataset? Reply JasonBrownlee January9,2020at7:24am # Callmodel.predict()withthenewinputs. Seethe“MakePredictions”section. Reply ustengg January9,2020at4:07pm # Nice!Thankyousomuch,Sir.Ifigureditoutusingthelinkonthe“Makepredictions”section.I’velearnedalotfromyourtutorials.You’rethebest! Reply JasonBrownlee January10,2020at7:22am # Nicework! Thanks. Reply monica January23,2020at4:00am # HiJason, Thanksforsharingthispost. Ihaveaquestion,whenItriedtosplitthedataset (X=dataset[:,0:8] y=dataset[:,8]) itgivesmeanerror:TypeError:‘(slice(None,None,None),slice(0,8,None))’isaninvalidkey howcanIfixit? Thanks, monica Reply JasonBrownlee January23,2020at6:41am # Sorrytohearthat,thismighthelp: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me Reply SamSarjant January23,2020at9:11pm # Thanksforthetutorial!Thisisawonderful‘HelloWorld’toDeepLearning Reply JasonBrownlee January24,2020at7:51am # Thanks,I’mhappyitwashelpful. Reply Keerthan January24,2020at4:01pm # HelloJason!hopeyouaredoinggood. 
Iamactuallydoingaprojectonclassificationofthyroiddiseaseusingbackpropagationwithstocasticgradientdescentmethod,canyouhelpmeoutwiththecodealittlebit? Reply JasonBrownlee January25,2020at8:31am # Perhapsstartbyadaptingthecodeintheabovetutorial? Reply Shakir January25,2020at1:29am # DearSir Iwanttopredictairpollutionusingdeeplearningtechniquespleasesuggesthowtogoaboutwithmydatasets Reply JasonBrownlee January25,2020at8:39am # Starthere: https://machinelearningmastery.com/start-here/#deep_learning_time_series Reply Yared February7,2020at4:36pm # AttributeError:module‘tensorflow’hasnoattribute‘get_default_graph’AttributeError:module‘tensorflow’hasnoattribute‘get_default_graph’ Reply JasonBrownlee February8,2020at7:05am # PerhapsconfirmyouareusingTF2andKeras2.3. Reply Yared February7,2020at4:41pm # IwenttodetectagreementerrorsinasentenceusingLSTMtechniquespleasesuggesthowtogoaboutwithmydatasets Reply JasonBrownlee February8,2020at7:05am # YoucangetstartedwithNLPproblemshere: https://machinelearningmastery.com/start-here/#nlp Reply PavitraNayak February29,2020at2:55pm # HelloJason Iamusingthiscodeformyproject.Itworksperfectlyforyourdataset.ButIhaveadatasetwhichhastoomany0’sand1’s.SoIamgettingthewrongprediction.WhatcanIdotosolvethisproblem? Reply JasonBrownlee March1,2020at5:22am # Herearesomesuggestions: https://machinelearningmastery.com/improve-deep-learning-performance/ Reply nurul March6,2020at5:50pm # hi.Iwannaask.ihadfollowallthestepsbuti’mstuckatthefitthemodel.Thiserroroccured.HowcanIsolvethisproblem? Reply kiki March6,2020at6:44pm # Ihavealreadytriedthisstepandstuckatthefitphaseandgotthiserror.Doyouhaveanysolutionformyproblem? ————————————————————————— ValueErrorTraceback(mostrecentcalllast) in 1#fitthekerasmodelonthedataset —->2model.fit(x,y,batch_size=10,epochs=150) ~\Anaconda4\lib\site-packages\keras\engine\training.pyinfit(self,x,y,batch_size,epochs,verbose,callbacks,validation_split,validation_data,shuffle,class_weight,sample_weight,initial_epoch,steps_per_epoch,validation_steps,validation_freq,max_queue_size,workers,use_multiprocessing,**kwargs) 1152sample_weight=sample_weight, 1153class_weight=class_weight, ->1154batch_size=batch_size) 1155 1156#Preparevalidationdata. ~\Anaconda4\lib\site-packages\keras\engine\training.pyin_standardize_user_data(self,x,y,sample_weight,class_weight,check_array_lengths,batch_size) 577feed_input_shapes, 578check_batch_axis=False,#Don’tenforcethebatchsize. –>579exception_prefix=’input’) 580 581ifyisnotNone: ~\Anaconda4\lib\site-packages\keras\engine\training_utils.pyinstandardize_input_data(data,names,shapes,check_batch_axis,exception_prefix) 143‘:expected‘+names[i]+‘tohaveshape‘+ 144str(shape)+‘butgotarraywithshape‘+ –>145str(data_shape)) 146returndata 147 ValueError:Errorwhencheckinginput:expecteddense_133_inputtohaveshape(16,)butgotarraywithshape(17,) Reply JasonBrownlee March7,2020at7:15am # Perhapsthiswillhelpyoucopythecodefromthetutorial: https://machinelearningmastery.com/faq/single-faq/how-do-i-copy-code-from-a-tutorial Reply JasonBrownlee March7,2020at7:13am # I’msorrytohearthat,perhapsthiswillhelp: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me Reply kiki March9,2020at12:18pm # Thanksfortheanswerjason Reply JasonBrownlee March10,2020at5:34am # You’rewelcome. Reply laz March7,2020at2:59pm # Hey,Jason! 
Again…Thanksforyourawesometutorialsandforgivingyourknowledgetothepublic!>800commentsandnearlyallanswered,you’regreat.Ican’tunderstandhowyoumanageallthat,writinggreatcontent,domlstuff,teach,learn,greatrespect! 2generalquestions: Question(1): Whyandwhendoweneedtoflatten()inputsandinwhichcasesnot? Forexample4numericinputs,alagof2ofeveryinputmeans4*2=8valuesperbatch: Ialwaysdothis,nomatterhowmanyinputsorlags,igivethatasflatarraytotheinput: 1set/batch:[[1.0,1.1,2.0,2.1,3.0,3.1,4.0,4.1]] Input(shape=(8,))#kerasfuncapi Doesitmakesensetoinputastructurelikethis,ifso–why/when? Better?[[[1.0,1.1],[2.0,2.1],[3.0,3.1],[4.0,4.1]]] Question(2): AreyoustillusingTheano?Astheydonotupdateit,itbecomesolder,butnotworse;).ItriedTensorflowalot–butalwayswithlowerperformanceintermsofspeed.Theanoismuchfaster(factor3-10)forme.Butusingmorethan1coreisalwaysslowerforme,inboththeanoandtf.Didyouexperiencedsimilarthings?Ialsotriedtorch,nicebutitwasalsoslowerasthegoodoldtheano.Anyideasoralternatives(ican’tusegpu/external/aws)? Iwouldbehappytoseeyoudoingsomedeepreinforcementlearning(DRL)stuff,whatdoyouthink?Areyou? Regards,keepitup😉 Reply JasonBrownlee March8,2020at6:07am # Youneedtoflattenwhentheoutputshapeofonelayerdoesnotmatchtheinputshapeofanother,e.g.CNNoutputtoaDense. No.Iuseandrecommendtensorflowandhaveforyears.Tensorflowusedtonotworkforwindowsusers,soIrecommendtheanoforthem–andstilldoiftheyhavetrouble.Theanoworksfineandwillcontinuetoworkfineformostapplications. No,RLisnotpractical/useful: https://machinelearningmastery.com/faq/single-faq/do-you-have-tutorials-on-deep-reinforcement-learning Reply laz March8,2020at11:29am # DearJason,thanksforyouranswer;)… “flattenwhentheoutputshapeofonelayerdoesnotmatchtheinputshapeofanother,e.g.CNNoutputtoaDense.” Thanks.Thequestionaboutthe“flatten”operationwasnotabouttheflatten()betweenlayers,itwasabouthowtopresentinputstotheinputlayer.Sorryforbeingvague.MaybeImisunderstoodsomething,arethereusecaseswheretheFEATURES/INPUTS/LAGSarenotflattened? “RLisnotpractical/useful” Isthisstatementbasedonyourexperienceordoyoutaketheopinionofotherswithoutcheckingityourselfhere;)?Pleasedonotmisunderstand,youaretheexperthere.However,icanrefutesomeargumentsagainstRL. Rewardsarehardtocreate:dependsonyourenvironment Unstable:dependsonyourenvironment,code,setup IstartedexperimentingwithasimpleDQN,IexpandeditstepbystepandnowIhavea“DuelingDoubleDQN”.Itlearnswellandquick.Iadmit–onsimpledata.Butitdoesitrepeatableandreproducible!Soiwouldsay:Ingeneral,itworks. Ihavetoseehowitworkswithmorecomplicateddata.ThatiswhyIemphasizedthattheperformanceofthismethodstronglydependsontheareaofapplication. Butthereisahugeproblem,mostpublicsourcescontainincorrectcodeorincorrectimplementations.Ihaveneverreportedorfoundsomanybugsonanysubject.Theseerrorsarecopiedagainandagainandintheendmanythinkthattheyarecorrect.Ihavecollectedtonsoflinksandpdffilestounderstandanddebugthisbeast. Nomatter,youhavetodecideforyourself.Ifyouwanttotakealookatit,takeasimpleexample,eventheDQN(withoutduelingordouble)isabletolearn–ifthecodeiscorrect.AndalthoughI’mnotamathematician:tounderstandhowitworksandwhatpossibilitiesitoffers–mademesmile😉… Reply JasonBrownlee March9,2020at7:14am # FormoreontheinputshapeofLSTMs/1dCNNs,seethis: https://machinelearningmastery.com/faq/single-faq/what-is-the-difference-between-samples-timesteps-and-features-for-lstm-input Idon’tyetseeanROIfor“developersatwork”incoveringRLasdescribedinthelink. 
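On the earlier point about flattening, a minimal sketch (with hypothetical shapes, not from the original exchange) of a Flatten layer bridging a Conv2D output and a Dense layer:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense

# the 2D feature maps produced by the convolutional layer must be flattened
# into a vector before they can feed a Dense layer
model = Sequential()
model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))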
laz (March 8, 2020 at 10:44 pm): Interesting read: "We use a double deep Q-learning network (DDQN) to find the right material type and the optimal geometrical design for metasurface holograms to reach high efficiency. The DDQN acts like an intelligent sweep and could identify the optimal results in ~5.7 billion states after only 2169 steps. The optimal results were found between 23 different material types and various geometrical properties for a three-layer structure. The computed transmission efficiency was 32% for high-quality metasurface holograms; this is two times bigger than the previously reported results under the same conditions." https://www.nature.com/articles/s41598-019-47154-z

Jason Brownlee (March 9, 2020 at 7:16 am): Thanks for sharing.

YzN (March 11, 2020 at 4:01 am): Literally the best "first neural network tutorial". Got 85.68 accuracy by adding layers and decreasing the batch size.

Jason Brownlee (March 11, 2020 at 5:29 am): Thanks. Well done!

Neha (March 14, 2020 at 12:05 am): Hello Jason, I have a quick question. I am trying to build just one sigmoid neuron for a binary classification task. Basically, I am assuming this is what a one-sigmoid-neuron model looks like:
model = Sequential()
model.add(Dense(1, activation='sigmoid'))
My inputs are images of size (39*39*3). I am unsure how to input these images to my Dense layer (which is the only layer I am using). I am currently using the following to load my images:
train_generator = train_datagen.flow_from_directory(train_data_dir,
    target_size=(39, 39),
    batch_size=batch_size,
    class_mode='binary')
But somehow the Dense layer cannot accept input shape (39, 39, 3). So my question is, how do I input my image data to the Dense layer?

Jason Brownlee (March 14, 2020 at 8:13 am): You can flatten the input, or use a CNN as the input instead, which is designed for 3D input samples.

Bertrand Bru (March 29, 2020 at 12:38 am): Hi Jason, thank you very much for your tutorial. I am new to the world of deep learning. I have been able to modify your code and make it work for a set of data I recorded with a 3-axis accelerometer. My goal was to detect whether I was walking or running. I recorded around 50 trials of each activity. From the signal, I calculated specific parameters that enable the code to differentiate the two activities. Amongst the parameters, I calculated for all axes the mean, min and max values, and some parameters in the frequency domain (the first 3 peaks of the power spectrum and their respective positions). It works very well and I am able to easily detect whether I am running or walking.
I then decided to add a third activity: standing. I also recorded 50 trials of this activity. If I train my model with standing and running, I can identify the two activities. Same if I train it with standing and walking, or with walking and running.
It is more complicated if I train my model with the three activities. In fact, it can't do it. It can only recognise the first two activities. So for example, if standing, walking and running have the IDs 0, 1 and 2, then it can only detect 0 and 1 (standing and walking). It thinks that all running trials are walking trials. If standing, running and walking have the IDs 0, 1 and 2, then it can only detect 0 and 1 (standing and running). It thinks that all walking trials are running trials.
So here is my question: assuming you have the dataset, if you needed to adapt your code so it can detect whether people are 0: not diabetic, 1: diabetic type 1, and 2: diabetic type 2, how would you modify your script? Thank you very much for your help.

Jason Brownlee (March 29, 2020 at 6:00 am): You're welcome. Well done. This is called multi-class classification; this tutorial will help: https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/

Bertrand Bru (March 29, 2020 at 7:15 am): Thank you so much for coming back to me so quickly. This is exactly what I was looking for. Cheers,

Jason Brownlee (March 30, 2020 at 5:27 am): You're welcome. I'm happy to hear that.
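For reference, a three-class problem like the one above is usually handled with a softmax output and categorical cross-entropy, as in the linked multi-class tutorial. A minimal sketch (not code from that tutorial; it assumes X is the feature matrix and y holds the integer labels 0, 1, 2):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

n_features = X.shape[1]                       # number of input columns in your data
y_onehot = to_categorical(y, num_classes=3)   # integer labels -> one hot vectors

model = Sequential()
model.add(Dense(12, input_dim=n_features, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(3, activation='softmax'))     # one output node per class
# categorical cross-entropy replaces binary cross-entropy for more than two classes
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y_onehot, epochs=150, batch_size=10, verbose=0)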
Dipak Kambale (March 31, 2020 at 10:16 pm): Hi Jason, I got accuracy 75.52. Is it ok?? Please let me know.

Jason Brownlee (April 1, 2020 at 5:49 am): Well done. Try running the example a few times.

islamuddin (April 1, 2020 at 6:20 pm): Hello sir Jason. Sir, how do I stabilise the accuracy across runs? One run gives, for example, 86% and the next time 82%. How do I solve this?
# import
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
# load the dataset
dataset = loadtxt('E:/ms/impotnt/iwp1.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = Sequential()
# model.add(Dense(25, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(30, input_dim=8, activation='relu'))
model.add(Dense(95, activation='relu'))
model.add(Dense(377, activation='relu'))
model.add(Dense(233, activation='relu'))
model.add(Dense(55, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10)
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
Output:
0.1153 - accuracy: 0.9531
Epoch 149/150
768/768 [==============================] - 0s 278us/step - loss: 0.1330 - accuracy: 0.9401
Epoch 150/150
768/768 [==============================] - 0s 277us/step - loss: 0.1468 - accuracy: 0.9375
768/768 [==============================] - 0s 41us/step
Accuracy: 94.01

Jason Brownlee (April 2, 2020 at 5:46 am): This is a common question that I answer here: https://machinelearningmastery.com/faq/single-faq/can-you-read-review-or-debug-my-code

M Husnain Ali Nasir (April 3, 2020 at 2:55 am):
Traceback (most recent call last):
File "keras_first_network.py", line 7, in
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
File "C:\Users\Hussnain\anaconda3\lib\site-packages\numpy\lib\npyio.py", line 1159, in loadtxt
for x in read_data(_loadtxt_chunksize):
File "C:\Users\Hussnain\anaconda3\lib\site-packages\numpy\lib\npyio.py", line 1087, in read_data
items = [conv(val) for (conv, val) in zip(converters, vals)]
File "C:\Users\Hussnain\anaconda3\lib\site-packages\numpy\lib\npyio.py", line 1087, in
items = [conv(val) for (conv, val) in zip(converters, vals)]
File "C:\Users\Hussnain\anaconda3\lib\site-packages\numpy\lib\npyio.py", line 794, in floatconv
return float(x)
ValueError: could not convert string to float: '"6'
I AM HAVING THE ABOVE ERROR WHILE RUNNING IT, PLEASE HELP. I am using Anaconda3, Python 3.7, tensorflow, keras.

Jason Brownlee (April 3, 2020 at 6:57 am): Sorry to hear that, this will help: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me

Madhawa Akalanka (April 9, 2020 at 6:22 pm):
(base) C:\Users\MadhawaAkalanka\pythoncodes>python keras_first_network.py
Using TensorFlow backend.
2020-04-09 13:42:28.003791: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2020-04-09 13:42:28.014066: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Traceback (most recent call last):
File "keras_first_network.py", line 12, in
model.fix(X, Y, epochs=150, batch_size=10)
AttributeError: 'Sequential' object has no attribute 'fix'
I had this error while running it. Please help.

Jason Brownlee (April 10, 2020 at 8:25 am): Sorry to hear that, see this: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me

Rahim Dehkharghani (April 14, 2020 at 2:05 am): Dear Jason, thanks for your wonderful website and books. I am a PhD holder and one of your fans in deep learning. Sometimes I get disappointed because I cannot achieve my goal in this area. My goal is to discover something new and publish it. Although I mostly understand your code, making a contribution in this field is difficult and requires understanding the whole theory, which I have not been able to do so far. Can you please give me some tips to continue? Thanks a lot.

Jason Brownlee (April 14, 2020 at 6:25 am): You're welcome. Keep working on it every day. That's my best advice.

Matt Gurney (April 16, 2020 at 10:52 pm): There is a typo: "input to the modell is defined".

Jason Brownlee (April 17, 2020 at 6:21 am): Thanks! Fixed.

Matt Gurney (April 16, 2020 at 11:28 pm): Using the latest libraries today I get a number of warnings due to the latest numpy (1.18.1) not being compatible with the latest TensorFlow (1.13.1), i.e.:
FutureWarning: Passing (type, 1) or '1type' … (6 times)
to_int32 (from tensorflow.python.ops.math_ops) is deprecated
The options are to revert to an older numpy or suppress the warnings. I took the suppress route with this code:
# first neural network with keras tutorial
# Suppress warnings due to TF/numpy version incompatibility: https://github.com/tensorflow/tensorflow/issues/30427#issuecomment-527891497
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
import tensorflow
# Suppress warning from TF: to_int32 (from tensorflow.python.ops.math_ops) is deprecated: https://github.com/aamini/introtodeeplearning/issues/25#issuecomment-578404772
import logging
logging.getLogger('tensorflow').setLevel(logging.ERROR)
import keras
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense

Jason Brownlee (April 17, 2020 at 6:21 am): I recommend using Keras 2.3 and TensorFlow 2.1.

Matt Gurney (April 17, 2020 at 12:18 pm): Yes, upgrading to TensorFlow 2.1 fixed it. I have now removed my warnings suppression and I don't see the warnings in the output.
I upgraded TF like this: pip install --upgrade tensorflow
I did follow your installation instructions from https://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/ and ended up with TF version 1.13.1. The command I ran was: conda install -c conda-forge tensorflow
I am on Mac; I see possibly relevant discussion here on TF 2.1 not being on conda: https://github.com/tensorflow/tensorflow/issues/35754

Jason Brownlee (April 17, 2020 at 1:31 pm): Well done! I use macports myself: https://machinelearningmastery.com/install-python-3-environment-mac-os-x-machine-learning-deep-learning/

meryem (April 17, 2020 at 1:25 am): Thank you Jason for the tutorial. I applied your example to mine by adding dropout and standardisation of X:
X = dataset[:,0:7]
y = dataset[:,7]
scaler = MinMaxScaler(feature_range=(0,1))
X = scaler.fit_transform(X)
# define the keras model
model = Sequential()
model.add(Dense(6, input_dim=7, activation='relu'))
model.add(Dropout(rate=0.3))
model.add(Dense(6, activation='relu'))
model.add(Dropout(rate=0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, y, epochs=30, batch_size=30, validation_split=0.1)
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
It shows me an accuracy of 100, which is not normal. To adjust my model, what should I do?

Jason Brownlee (April 17, 2020 at 6:22 am): Well done! Perhaps evaluate your model using k-fold cross validation.
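A minimal sketch of that kind of k-fold evaluation follows (it assumes scikit-learn is installed; the fold count, layer sizes and epochs are arbitrary illustration values, not tuned settings):

from numpy import loadtxt, mean, std
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X, y = dataset[:, 0:8], dataset[:, 8]
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = []
for train_ix, test_ix in kfold.split(X, y):
    # define a fresh model for each fold
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    # train on the fold's training rows only, then score on the held-out rows
    model.fit(X[train_ix], y[train_ix], epochs=50, batch_size=10, verbose=0)
    _, acc = model.evaluate(X[test_ix], y[test_ix], verbose=0)
    scores.append(acc)
print('Accuracy: %.3f (+/- %.3f)' % (mean(scores), std(scores)))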
meryem (April 17, 2020 at 7:29 am): Yes, I followed your example using k-fold cross validation and it always gives me 100%. If I move the standardisation it gives 83%. Can you guide me please?
seed = 4
numpy.random.seed(seed)
dataset = loadtxt('data.csv', delimiter=',')
X = dataset[:,0:7]
Y = dataset[:,7]
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X = sc.fit_transform(X)
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
cvscores = []
for train, test in kfold.split(X, Y):
    model = Sequential()
    model.add(Dense(12, input_dim=7, activation="relu"))
    model.add(Dropout(rate=0.2))
    model.add(Dense(6, activation="relu"))
    model.add(Dropout(rate=0.2))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X[train], Y[train], epochs=20, batch_size=10, verbose=1)
    scores = model.evaluate(X[test], Y[test], verbose=0)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
    cvscores.append(scores[1]*100)
print("%.2f%% (+/- %.2f%%)" % (numpy.mean(cvscores), numpy.std(cvscores)))

Jason Brownlee (April 17, 2020 at 7:48 am): Nice work! Perhaps your prediction task is trivial?

meryem (April 17, 2020 at 8:08 am): You are very helpful. Or is it because I don't have enough data? So there is nothing else I can use?

Jason Brownlee (April 17, 2020 at 1:28 pm): Perhaps.

Farjad Haider (April 17, 2020 at 11:00 pm): Sir Jason, you are awesome! Such a nice and easy to comprehend tutorial. Great work!

Jason Brownlee (April 18, 2020 at 5:57 am): Thanks!

Joan Estrada (April 19, 2020 at 3:51 am): "Note, the most confusing thing here is that the shape of the input to the model is defined as an argument on the first hidden layer. This means that the line of code that adds the first Dense layer is doing 2 things, defining the input or visible layer and the first hidden layer."
Could you explain this better? Thanks, nice work!

Jason Brownlee (April 19, 2020 at 6:02 am): Yes, see this: https://machinelearningmastery.com/faq/single-faq/how-do-you-define-the-input-layer-in-keras

Hany (April 19, 2020 at 9:57 am): Actually, I cannot thank you enough, Dr. Brownlee. God bless you.

Jason Brownlee (April 19, 2020 at 1:14 pm): Thanks. You're very welcome!

Rahim (April 22, 2020 at 5:56 am): Dear Jason, thanks for this interesting code. I tested this code on pima-indians-diabetes on my computer with Keras 2.3.1, but strangely I got an accuracy of 52%. I wonder why there is this much difference between your accuracy (76%) and mine (52%).

Jason Brownlee (April 22, 2020 at 6:10 am): You're welcome. Perhaps try running the example a few times?

Sarmad (April 24, 2020 at 8:04 pm): I want to ask: in the first layer (a hidden layer) we defined input_dim=8 with respect to the features we have, right, and we specified neurons=12. My concern is that what I studied says we specify neurons with respect to the inputs (features), meaning if we have 8 inputs the neurons will also be 8, but you specified 12. Why?
2) In any problem we have to specify a neural network, right? It can be any kind, e.g. convolutional, recurrent etc. So which neural network have we chosen here, and where?
3) We have to assign weights. So where have we assigned them?
Please let me know. Thanks sir.

Jason Brownlee (April 25, 2020 at 6:44 am): The first line of the model defines 2 things, the input or visible layer (8) and the first hidden layer (12). More here: https://machinelearningmastery.com/faq/single-faq/how-do-you-define-the-input-layer-in-keras
These two things can have different values; they are not directly related.
Yes, this will help you choose models: https://machinelearningmastery.com/when-to-use-mlp-cnn-and-rnn-neural-networks/
Weights are assigned small random numbers automatically when you call compile(): https://machinelearningmastery.com/why-initialize-a-neural-network-with-random-weights/

Sarmad (April 26, 2020 at 7:44 pm): Sir, I am still confused: in ML we specify which algorithm to implement for the scenario, like for regression we can choose linear regression, logistic regression etc. Now, in this case, what neural net have we chosen? Convolutional, RNN, etc?

Jason Brownlee (April 27, 2020 at 5:33 am): Linear regression is for regression, logistic regression is for classification. Here are some regression algorithms to try on a regression task: https://machinelearningmastery.com/spot-check-regression-machine-learning-algorithms-python-scikit-learn/

Sarmad (April 24, 2020 at 8:31 pm): Where are the weights, bias and input values?

Jason Brownlee (April 25, 2020 at 6:46 am): Weights are initialized to small random values when we call compile().

mouna (April 26, 2020 at 8:51 pm): Hello Jason, congratulations on all the good work. I want to ask you: how can we know, over all epochs, the average training time and validation time for a model?

Jason Brownlee (April 27, 2020 at 5:34 am): You could extrapolate the time of one epoch to the number of epochs you want to train.

Jason Chia (April 28, 2020 at 2:41 pm): Hi Jason, I am very new to deep learning. I understand that you do model.fit to fit the data and model.predict to predict the values of the class variable y. However, is it also possible to extract the parameter estimates and derive f(X)=y (similar to regression)?

Jason Brownlee (April 29, 2020 at 6:15 am): Perhaps for small models, but it would be a mess with thousands of coefficients. The model is a complex circuit.

Dina (April 28, 2020 at 4:34 pm): Hi Jason, do you have an idea on how to predict a price or a range of values?

Dina (April 28, 2020 at 4:39 pm): If I use a keras model to predict a price/range of values, is it possible for me to find the accuracy of the keras model? Because your article only predicts the binary output.

Jason Brownlee (April 29, 2020 at 6:19 am): You are describing a regression problem, I recommend starting here: https://machinelearningmastery.com/regression-tutorial-keras-deep-learning-library-python/

Jason Brownlee (April 29, 2020 at 6:17 am): A prediction range is called a prediction interval, learn more here: https://machinelearningmastery.com/prediction-intervals-for-machine-learning/

Hume (May 5, 2020 at 10:54 am): Thank you for your explanation. I am a beginner in machine learning as well as Python. Would you please help me in getting the exact CSV data file for predicting the Hepatitis B virus?

Jason Brownlee (May 5, 2020 at 1:37 pm): This will help you locate a dataset: https://machinelearningmastery.com/faq/single-faq/where-can-i-get-a-dataset-on-___

Ababou Nabil (May 12, 2020 at 2:01 pm): 768/768 [==============================] - 2s 3ms/step, Accuracy: 76.56

Jason Brownlee (May 13, 2020 at 6:21 am): Well done!

MAHESH MADHUSHAN (May 24, 2020 at 11:29 am): Why didn't you normalize the data? Is that not necessary? I have seen on some tutorials that they normalize data to a common scale using "from sklearn.preprocessing import StandardScaler". What is the difference between that method and your method?

Jason Brownlee (May 25, 2020 at 5:43 am): It can help for some algorithms to normalize or standardize the input data. Perhaps try it and see.

Henry Levkine (May 26, 2020 at 7:49 am): Jason, you are the best! My name for your program here is "helloDL.py". I am sure your future book "Hello Deep Learning" will be the most popular on the market. People need programs like helloClassification.py, helloRegression.py, helloPrediction.py, helloDogsCats.py, helloFaces.py and so on! Thank you for your hard work!

Jason Brownlee (May 26, 2020 at 1:19 pm): Thanks.
You can find all of these on the blog, use the search.

Thijs (June 12, 2020 at 12:46 am): Hello, is there a possibility to access the accuracy of the last epoch? If yes, how can I access this and save it? Kind regards

Jason Brownlee (June 12, 2020 at 6:14 am): Yes, the history object contains the scores calculated for each epoch: https://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/

Krishan (June 16, 2020 at 11:08 am): Accuracy: 82.42 with epochs=1500 and batch_size=1. I don't know if what I did was appropriate. Any advice is appreciated.

Jason Brownlee (June 16, 2020 at 1:39 pm): Well done!

Saad (June 19, 2020 at 9:20 pm): Hi Jason, thanks a lot for this wonderful learning platform. Why were 12 neurons used in the first hidden layer, what is the criteria behind it? Is it random or is there an underlying reason/calculation? (I presumed that the number of neurons in a hidden layer would always be between the number of inputs and the number of outputs.)

Jason Brownlee (June 20, 2020 at 6:12 am): I chose the configuration after a little trial and error. There is no good theory for configuring neural nets: https://machinelearningmastery.com/faq/single-faq/how-many-layers-and-nodes-do-i-need-in-my-neural-network

Paras Memon (July 30, 2020 at 9:05 am): Hello Jason, I have this shape of training and testing datasets:
xTrain_CN.shape, yTrain_CN.shape, xTest_CN.shape
((320, 56, 6251), (320,), (80, 56, 6251))
I am getting this error: ValueError: Error when checking input: expected dense_20_input to have 2 dimensions, but got array with shape (320, 56, 6251)
Below is the code:
def nn_keras(xTrain_CN, yTrain_CN, xTest_CN):
    model = Sequential()
    model.add(Dense(12, input_dim=6251, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    # compile the keras model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit the keras model on the dataset
    model.fit(xTrain_CN, yTrain_CN, epochs=150, batch_size=10)
    # evaluate the keras model
    _, accuracy = model.evaluate(xTrain_CN, yTrain_CN)
    print('Training Accuracy: %.2f' % (accuracy*100))
    _, accuracy = model.evaluate(xTrain_CN, yTrain_CN)
    print('Testing Accuracy: %.2f' % (accuracy*100))
nn_keras(xTrain_CN, yTrain_CN, xTest_CN)

Jason Brownlee (July 30, 2020 at 1:44 pm): An MLP must take 2D data as input (rows and columns) and 1D data as output during training.

Joanne (August 12, 2020 at 1:25 am): Hi Jason, this is a great tutorial, very easy to understand!! Is there a tutorial on how to add weights and bias into our model?

Jason Brownlee (August 12, 2020 at 6:11 am): Thanks!

Luis Cordero (August 20, 2020 at 12:05 pm): Hello, if I have a prediction problem, is it absolutely necessary to scale the input variables to use the sigmoid or relu activation functions, or whichever one you decide to use?

Jason Brownlee (August 20, 2020 at 1:37 pm): No, but try it and compare results.

Luis Cordero (August 20, 2020 at 1:15 pm): How can I create a configuration that has more than one output, i.e. the output layer has 2 or more values?

Jason Brownlee (August 20, 2020 at 1:39 pm): Yes, just specify the number of targets in the output layer and prepare your training data accordingly. I have a tutorial on exactly this written and scheduled, for next week I think.

Luis Cordero (September 1, 2020 at 4:29 pm): What will be the name of the tutorial so I can find it?

Jason Brownlee (September 2, 2020 at 6:24 am): Right here: https://machinelearningmastery.com/deep-learning-models-for-multi-output-regression/

Simon Suarez (August 30, 2020 at 8:27 am): Hi Jason. Thank you for the great quality of this article. I am experienced with machine learning using scikit-learn, and reading this post (and some of your previous ones on the topic) helped me a lot to get into making multilayer perceptrons. I tested the knowledge I learned here on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. I got around 92.965% accuracy for train and 96.491% for test, using only 3 features (radius, texture, smoothness) and the following topology:
Epochs = 250
Batch size = 60
Activation function = ReLU
Optimizer = 'Nadam'
Layer; number of neurons; activation function:
Input; 3; none
Hidden 1; 4; ReLU
Hidden 2; 4; ReLU
Hidden 3; 2; ReLU
Output; 1; Sigmoid
Train and test were split using: train_test_split(X, y, test_size=0.33, random_state=42)
Thanks!

Jason Brownlee (August 31, 2020 at 5:58 am): Thanks. Well done on your results, Simon!

Berns Buenaobra (September 7, 2020 at 7:32 am): 0s 833us/step - loss: 0.4607 - accuracy: 0.7773

Jason Brownlee (September 7, 2020 at 8:36 am): Well done!

Berns Buenaobra (September 7, 2020 at 7:37 am): Second iteration with the laptop GPU gives: 0s 958us/step - loss: 0.4119 - accuracy: 0.8216, Accuracy: 82.16

Ahmed Nuru (September 8, 2020 at 5:01 pm): Hi Jason, how can I predict whether an image is forged or genuine using a pretrained deep-learning model?

Jason Brownlee (September 9, 2020 at 6:44 am): Perhaps prepare a dataset of real and fake images and train a binary classification model to differentiate the two. Perhaps this tutorial will help you get started: https://machinelearningmastery.com/how-to-develop-a-convolutional-neural-network-to-classify-photos-of-dogs-and-cats/

Fatma Zohra (September 11, 2020 at 2:30 am): Hello Jason, can you please guide me on how to use a query and a document as inputs to our NN (knowing that they are both represented by frequency vectors)?

Jason Brownlee (September 11, 2020 at 6:01 am): Perhaps start here: https://machinelearningmastery.com/start-here/#nlp

fatma zohra (September 13, 2020 at 2:41 am): Hi Dr Jason, thanks a lot for the reply, the link was useful for me. Yet I am still a bit lost since I am new to dealing with NNs. Actually, I want to calculate the similarity between the query and the document using the NN. The inputs are the TF vector of the document and the TF vector of the query, and the output is the similarity (0 if no, 1 if yes). I have the idea for my NN but I don't know where to start… I would be grateful if you could help me (a similar code example that I can use, maybe). Waiting for your reply… thanks in advance.

Jason Brownlee (September 13, 2020 at 6:10 am): I think you're asking about calculating text similarity. If so, sorry, I don't have tutorials on that topic.

fatma zohra (September 13, 2020 at 6:38 am): Yeah, this is what I was asking for. Anyway, thanks a lot for your tutorials, they are very clear and fruitful.

Jason Brownlee (September 13, 2020 at 8:28 am): You're welcome.

yibrah fisseha (September 22, 2020 at 11:41 pm): I would like to thank you a lot for your tutorials. Can you please guide me on how to evaluate the model using confusion matrix parameters such as recall, precision and F1 score?

Jason Brownlee (September 23, 2020 at 6:40 am): Yes, here are examples: https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/

derya (September 23, 2020 at 5:03 am): Great tutorial, helped a lot!

Jason Brownlee (September 23, 2020 at 6:44 am): Thanks!

Sean H. Kelley (September 23, 2020 at 6:38 am): Hi Jason, thank you very much for this. I appreciate the extra in-depth explanations in the links to other pages. I am wondering how to keep the "state of mind". You train it while it runs and get a level of accuracy. If you finally get the level of accuracy from training a certain configuration, how do you keep that configuration/state of mind/level of accuracy of the artificial neural net without having to train it all over again? Can you store a snapshot of that "state of mind" somewhere so that when you have a good working model, you just use that to run new data against, or am I still missing some key elements in my attempt to grasp this? Thank you!
Jason Brownlee (September 23, 2020 at 6:46 am): You can save your model and load it later to make predictions, see this tutorial: https://machinelearningmastery.com/save-load-machine-learning-models-python-scikit-learn/

Sean H. Kelley (September 24, 2020 at 12:53 am): Thank you very much!

Jason Brownlee (September 24, 2020 at 6:16 am): You're welcome.

Muhammad Asad Arshed (October 10, 2020 at 12:34 am): Awesome blog and technical skill. Would you like to refer me to some other blogs?

Jason Brownlee (October 10, 2020 at 7:06 am): Thanks!

Brijesh (October 10, 2020 at 5:57 pm): Hi, can we only use the CSV file format?

Jason Brownlee (October 11, 2020 at 6:44 am): No, deep learning can use images, text data, audio data, almost anything that can be represented with numbers.

imene (October 18, 2020 at 4:49 am): With epochs=10000 and batch size=20 I got accuracy=84% and loss: 0.3434.

Jason Brownlee (October 18, 2020 at 6:12 am): Well done!

YAŞAR SAİD DERDİMAN (December 27, 2020 at 4:12 pm): This is good, but your model's generalization error is probably higher. More epochs means more overfitting, therefore you should use fewer epochs for any deep learning training.

Jason Brownlee (December 28, 2020 at 5:58 am): Good advice.

imene (October 18, 2020 at 4:59 am): First, thanks for your good explanation. How can I save the trained model to be used for testing, because the training repeats each time I try to execute the program? Thanks.

Jason Brownlee (October 18, 2020 at 6:12 am): Good question, this will show you how: https://machinelearningmastery.com/save-load-keras-deep-learning-models/

Fatima (October 24, 2020 at 5:18 am): Hi Jason, I applied a deep neural network (DNN) to do the prediction. It works and it is perfect. I have a problem in evaluating the predicted results: I used metrics.confusion_matrix and it gave me this error: ValueError: Classification metrics can't handle a mix of binary and continuous targets. Any suggestions to solve the error? Note: my class label (outcome variable) is binary (0, 1). Thanks in advance.

Jason Brownlee (October 24, 2020 at 7:12 am): See this tutorial: https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/

KAl (October 27, 2020 at 2:53 am): First of all, please allow me to thank you for this great tutorial and for your valuable time. I wonder: you trained and evaluated the network on the same dataset. Why did it not generate 100% accuracy then? Thanks

Jason Brownlee (October 27, 2020 at 6:46 am): All models have error. If we get perfect skill/100% accuracy then the problem is likely too simple and machine learning is not required: https://machinelearningmastery.com/faq/single-faq/what-does-it-mean-if-i-have-0-error-or-100-accuracy

Zuzana (November 1, 2020 at 11:15 pm): Hi, great tutorial, everything works, except when trying to add predictions I get the following error message. Could you please help? Thanks a lot.
WARNING:tensorflow: From C:/Users/Zuzana Šútová/Desktop/RTPnew/3_training_deep_learning/data_PDS/keras_first_network_including_predictions.py:27: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01.
Instructions for updating:
Please use instead: * np.argmax(model.predict(x), axis=-1), if your model does multi-class classification (e.g. if it uses a softmax last-layer activation). * (model.predict(x) > 0.5).astype("int32"), if your model does binary classification (e.g. if it uses a sigmoid last-layer activation).
Warning (from warnings module):
File "C:\Users\Zuzana Šútová\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\keras\engine\sequential.py", line 457
return (proba > 0.5).astype('int32')
RuntimeWarning: invalid value encountered in greater
Traceback (most recent call last):
File "C:\Users\Zuzana Šútová\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2895, in get_loc
return self._engine.get_loc(casted_key)
File "pandas\_libs\index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 101, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1032, in pandas._libs.hashtable.Int64HashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1039, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:/Users/Zuzana Šútová/Desktop/RTPnew/3_training_deep_learning/data_PDS/keras_first_network_including_predictions.py", line 30, in
print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))
File "C:\Users\Zuzana Šútová\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\frame.py", line 2902, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Users\Zuzana Šútová\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\core\indexes\base.py", line 2897, in get_loc
raise KeyError(key) from err
KeyError: 0

Jason Brownlee (November 2, 2020 at 6:40 am): Sorry to hear that, this may help: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me

Zuzana (November 2, 2020 at 6:50 am): I am sorry, but none of that helped :/

Julian A. Epps (November 3, 2020 at 7:58 am): Where can I find documentation on these Keras functions that you are using? I don't know how any of these functions work.

Jason Brownlee (November 3, 2020 at 10:08 am): Good question, here: https://keras.io/api/

Umair Rasool (November 8, 2020 at 4:42 am): Hello Sir, I am not actually familiar with ML, so someone is doing my prediction task using a raster dataset with Python. He is just giving final results and a CSV file rather than a final prediction map as a raster. Could you please guide me: does ML work like this, or is he missing something to generate the final map? Please respond. Thanks

Umair Rasool (November 8, 2020 at 4:44 am): Sorry, I made a little mistake: "final result as CSV file".

Jason Brownlee (November 8, 2020 at 6:42 am): Perhaps this framework will help: https://machinelearningmastery.com/start-here/#process

Halil (November 27, 2020 at 6:09 am): Thank you for this brilliantly explained tutorial! Actually, I am bored of watching videos which have lots of boring talk and superficial explanations. I have discovered my main resource now. By the way, I guess there is an error here, no? rounded = [round(x[0]) for x in predictions] should be "round(X…".

Jason Brownlee (November 27, 2020 at 6:44 am): You're welcome. There are many ways to round an array.

Halil (November 30, 2020 at 5:45 am): I mean, that "x" should be "X". No?

RAJSHREE SRIVASTAVA (November 28, 2020 at 4:05 am): Hi Jason, hope you are doing well. I am working on an ANN for image classification in Google Colab. I am getting this error, can you help me find a solution for it?
InvalidArgumentError: Incompatible shapes: [100,240,240,1] vs. [100,1]
[[node gradient_tape/mean_squared_error/BroadcastGradientArgs (defined at :14)]] [Op:__inference_train_function_11972]
Function call stack: train_function
Waiting for your reply.

Jason Brownlee (November 28, 2020 at 6:41 am): Sorry, I don't know about colab: https://machinelearningmastery.com/faq/single-faq/do-code-examples-run-on-google-colab

RAJSHREE SRIVASTAVA (November 28, 2020 at 8:14 pm): Hi Jason, thanks for your reply. OK, in plain Python I am working on an ANN for image classification. I am getting this error, can you help me find a solution for it?
InvalidArgumentError: Incompatible shapes: [100,240,240,1] vs. [100,1]
[[node gradient_tape/mean_squared_error/BroadcastGradientArgs (defined at :14)]] [Op:__inference_train_function_11972]
Function call stack: train_function

Jason Brownlee (November 29, 2020 at 8:12 am): Sorry, the cause of the error is not clear, you may need to debug your model. Here are some suggestions: https://machinelearningmastery.com/faq/single-faq/can-you-read-review-or-debug-my-code

Hanem (December 17, 2020 at 11:07 am): Thanks a million, it helped me a lot. Actually, all of your articles are informative and a good guide for me.

Jason Brownlee (December 17, 2020 at 12:59 pm): You're welcome, I'm happy to hear that!

John Smith (December 28, 2020 at 7:58 am): This was a brilliant tutorial. I think what could be done to improve it is adding an example of actual predictions. The prediction bit is quite brief; I don't quite have an understanding of how to use that array of "predictions" to actually predict something. If I wanted to feed it some test data and get a prediction, how could I do that? I will consult some of your other helpful guides, but it would be great to have it all in this one tutorial.

John Smith (December 28, 2020 at 8:07 am): I had not had my coffee when I wrote this. I see now we are passing the original variables back into the model, predicting, and printing out the prediction vs actual. 🙂 Thanks, you made a great tutorial! Have a good Christmas and new year.

Jason Brownlee (December 28, 2020 at 8:19 am): No problem at all! I'm happy it helped you kickstart your journey with deep learning.

Joe (January 3, 2021 at 5:00 am): Hi Jason, happy new year! You are predicting on the same dataset, X, that you used to train the model. I would have thought that the model would have produced close to 100% accuracy in this case, since the model is so well trained specifically with respect to X (maybe even overfitted). Why are we only getting 76.9% accuracy, not close to 100%? Thanks, Joe

Jason Brownlee (January 3, 2021 at 6:00 am): Yes, I did that to keep the example simple, I explain more here: https://machinelearningmastery.com/faq/single-faq/why-do-you-use-the-test-dataset-as-the-validation-dataset
No model is perfect, they are all trying to generalize from the training data.

Roberto Aguirre Maturana (January 7, 2021 at 12:19 pm): Excellent tutorial, well explained and very easy to follow. It seems you have to update one line that was deprecated in 2021:
# instead of
# predictions = model.predict(X)
# now you have to use
predictions = (model.predict(X) > 0.5).astype("int32")

Jason Brownlee (January 7, 2021 at 2:04 pm): Thanks. I don't think so: https://keras.io/api/models/model_training_apis/#predict-method and https://www.tensorflow.org/api_docs/python/tf/keras/Sequential#predict
If you want labels you can use model.predict_classes(); this will help: https://machinelearningmastery.com/how-to-make-classification-and-regression-predictions-for-deep-learning-models-in-keras/

Girish Ahire (January 8, 2021 at 8:27 pm): I got 65%.

Jason Brownlee (January 9, 2021 at 6:41 am): Well done!

Tom Rauch (January 15, 2021 at 6:37 am): Hi, I have these installed in my virtual env (along with other libraries):
Keras==2.4.3
Keras-Preprocessing==1.1.2
But when I run this:
# first neural network with keras tutorial
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
I get a 'Dead Kernel' error message in Jupyter; the first line runs fine but the 'dead kernel' message appears when it gets to keras. Any idea on how to fix this? Thanks!
Jason Brownlee (January 15, 2021 at 8:46 am): I recommend not using a notebook as they cause problems for almost everyone: https://machinelearningmastery.com/faq/single-faq/why-dont-use-or-recommend-notebooks
Instead, save the code using a simple text editor like Sublime or Atom and run the script from the command line: https://machinelearningmastery.com/faq/single-faq/how-do-i-run-a-script-from-the-command-line

Tom Rauch (January 15, 2021 at 9:32 am): Thank you Jason! I will give the command line a try. Tom

Jason Brownlee (January 15, 2021 at 11:32 am): You're welcome.

Tom Rauch (January 15, 2021 at 12:22 pm): Hi Jason, I followed your instructions but am still running into issues with Keras, maybe I did not install it correctly?
(rec_engine) [email protected]:~/code$ python keras.py
Traceback (most recent call last):
File "keras.py", line 3, in
from keras.models import Sequential
File "/home/tom/code/keras.py", line 3, in
from keras.models import Sequential
ModuleNotFoundError: No module named 'keras.models'; 'keras' is not a package
But when I run this, I do see it installed:
(rec_engine) [email protected]:~/code$ pip list | grep Keras
Keras 2.4.3
Keras-Preprocessing 1.1.2
I followed the pip install found in this guide: https://www.liquidweb.com/kb/how-to-install-keras/
I think my next step may be to create a new virtualenv for just Keras and TensorFlow. Thanks, Tom

Jason Brownlee (January 15, 2021 at 1:26 pm): I think there may be an issue with your environment, perhaps this tutorial will help: https://machinelearningmastery.com/setup-python-environment-machine-learning-deep-learning-anaconda/

Govind Kelkar (January 15, 2021 at 10:58 pm): Hi Dr. Jason, I executed your code in Google Colab and got it running. The only difference I found is that while predicting the new data you had listed the sequence as 10101 and I got it as 01010. I also made a few changes to the code. Nonetheless, I got the code working at least. Now I will try and play with it to get better accuracies.

Jason Brownlee (January 16, 2021 at 6:55 am): Well done!

Tom Rauch (January 16, 2021 at 9:18 am): Hi Jason, I created a new virtualenv and loaded Keras, TensorFlow etc., created a .py with all of your code, then ran it at the command line in the directory that contains both the csv and py. But I got this error:
(ML) [email protected]:~/code$ python mykerasloader.py
Illegal instruction (core dumped)
Is there a logger I should be using to see more detail? Thanks, Tom

Jason Brownlee (January 16, 2021 at 1:20 pm): That does not look good, I suspect there is something up with your environment. Perhaps you can try posting/searching on stackoverflow.com

Francisco Santiago (January 17, 2021 at 9:51 am): Creating neural network:
24/24 [==============================] - 0s 756us/step - loss: 0.3391 - accuracy: 0.8503
Accuracy: 85.03
Wohooo!!

Jason Brownlee (January 17, 2021 at 1:27 pm): Well done!

Jeremy (January 17, 2021 at 4:38 pm): Dr. Brownlee, good morning, sir! Curious for your thoughts on something: is there value in running the algorithm, say, fifty times and averaging the accuracy? I've used that technique before to good effect, but since this is relatively new to me, having an experienced teacher of machines set me straight would be helpful. If this is something you think is useful, I have one more question that comes from my still-limited understanding of things: where would I start the 'for' loop? My first thought was starting it before 'model = Sequential()', but that would mean redefining the NN structure each time, which doesn't make much sense. My second thought was starting it before 'model.fit()', in which case the model stays the same and the loss/optimization functions stay the same. Thank you very much for your time! V/r, Jeremy

Jason Brownlee (January 18, 2021 at 6:05 am): Yes, it reduces the variance in the method and can be used for both evaluating model performance and making predictions. More details are here: https://machinelearningmastery.com/faq/single-faq/why-do-i-get-different-results-each-time-i-run-the-code
The loop is around the definition, training and evaluation of the model.

Tom Rauch (January 18, 2021 at 6:33 am): Hi Jason, any tutorials on using the code in this post in Google Colab? Not sure how to point to the csv using Colab. Thanks, Tom

Jason Brownlee (January 18, 2021 at 8:58 am): This is a common question that I answer here: https://machinelearningmastery.com/faq/single-faq/do-code-examples-run-on-google-colab

Anna (January 21, 2021 at 8:53 am): Hello Jason, I have a question. I want to create a model to predict urban development. I started with your model above. I use the information about the urban and non-urban points for 4 years (2000, 2006, 2012, 2018). I also use information about the slope and some distances for every point. I have created a dataset which contains information in columns like this: 2000-2006, 2006-2012. After training I have 94% accuracy, but when I give the model the year 2006 it doesn't predict 2012 very well. There are many problems. I thought that with this accuracy the model would have predicted 2012 very well. I don't know where the problem might be… in the training section, in the predictor, or somewhere else?? Please tell me your opinion, because I have been stuck on this for weeks and I have to find the solution quickly!!!!

Jason Brownlee (January 22, 2021 at 7:13 am): It sounds like you're working with a time series dataset. If so, it would not be valid to train the model on the future and predict the past. I recommend starting here: https://machinelearningmastery.com/start-here/#deep_learning_time_series

James Parker (January 22, 2021 at 8:43 pm): Thank you for this great article, but I have a question: what does the "_," before accuracy stand for? I searched for it on the internet but couldn't find it.

Jason Brownlee (January 23, 2021 at 7:04 am): We use the underscore (_) in Python to eat up return values or variables we don't care about. In this case the loss, as we only care about accuracy.

FOGANG FOKOA (January 24, 2021 at 12:43 pm): Hello, I want to input a numpy 2D array into an MLP, but I have an array of 50395 rows where each element is a 2D array of shape (x, 129). x varies because some matrices have different row numbers. Here is an example:
train['spec'].shape
>> (50395,)
train['spec'][0].shape
>> (41, 129)
train['spec'][5].shape
>> (71, 129)
Here is a snippet of my code:
X_train = train['spec'].values; X_valid = valid['spec'].values
y_train = train['label'].values; y_valid = valid['label'].values
model.add(Dense(12, input_shape=(50395,), activation='relu'))
model.fit(X_train, y_train, validation_data=(X_valid, y_valid), epochs=500, batch_size=1)
I get this error on the last line (model.fit):
ValueError: Error when checking input: expected dense_54_input to have shape (50395,) but got array with shape (1,)
How do I fix this problem so that the network can take as input all 50395 matrices of shape (x, 129)?

Jason Brownlee (January 24, 2021 at 12:52 pm): Perhaps they are "timesteps", and if so this may help: https://machinelearningmastery.com/faq/single-faq/what-is-the-difference-between-samples-timesteps-and-features-for-lstm-input
And then pad all sequences to the same length: https://machinelearningmastery.com/data-preparation-variable-length-input-sequences-sequence-prediction/

FOGANG FOKOA (January 24, 2021 at 1:40 pm): In fact, I absolutely must use an MLP. I had sounds of 1 second at a frequency of 16000 Hz. As a result, each audio gave me an array of 16000 values. After removing the silence in those audios, I ended up with arrays of different sizes. Then I transformed these audios into numpy matrices of numbers using the spectrogram algorithm to input them to the neural network. I ended up with 2-dimensional matrices with the same number of columns but different numbers of rows. Is it possible to pass them in, knowing that the matrices have different sizes?
Jason Brownlee (January 25, 2021 at 5:47 am): As a first step, perhaps try padding all inputs to the same size and use a masking input layer followed by a dense/MLP architecture.

FOGANG FOKOA (January 28, 2021 at 12:56 am): I did as you advised me, and I got past that difficulty! Now my code looks like this:
model = Sequential()
model.add(Dense(units=8, input_shape=(71, 129), activation='relu'))
model.add(Dense(units=8, activation='relu'))
model.add(Dense(units=11, activation='sigmoid'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
# model = mpl_model()
X_train = list(train_df['spec']); X_valid = list(valid_df['spec'])
y_train = train_df['label']; y_valid = valid_df['label']
# labels = ['yes', 'no', 'up', 'down', 'left', 'right', 'on', 'off', 'stop', 'go']
encoder = LabelEncoder()
encoder.fit(y_train)
encoded_y_train = encoder.transform(y_train)
dummy_y_train = to_categorical(encoded_y_train)
# Fit model, validation_data=(np.array(X_valid), y_valid)
model.fit(np.array(X_train), np.array(list(dummy_y_train)), epochs=50, batch_size=50)
and I get this error:
ValueError: A target array with shape (50395, 11) was passed for an output of shape (None, 71, 11) while using as loss categorical_crossentropy. This loss expects targets to have the same shape as the output.

Jason Brownlee (January 28, 2021 at 6:01 am): Ouch, it looks like the shape of the data does not match the expectations of the model. Perhaps focus on the prepared data and inspect it after each change. Get that right, then focus on the modeling part.

FOGANG FOKOA (January 29, 2021 at 7:28 am): Okay. It's done and it works well.. thank you.

Jason Brownlee (January 29, 2021 at 7:40 am): Nice work!

Kinson VERNET (January 29, 2021 at 1:06 am): Hello, thank you for this tutorial. Over 100 runs I got a score of 76.82 for the accuracy.

Jason Brownlee (January 29, 2021 at 6:06 am): Well done!

Kamal (January 30, 2021 at 12:14 pm): It's a superb tutorial for implementing your first deep neural network in Python. Thank you, dear Jason Brownlee.

Jason Brownlee (January 30, 2021 at 12:35 pm): Thanks, well done on your progress!

Rob (February 18, 2021 at 1:59 pm): Hi there, I'm currently stuck on fitting the model. The only thing I have done differently is use read_csv so I didn't have to put anything locally. But I've validated the X/y outputs to be the same. My error is: ValueError: logits and labels must have the same shape ((None, 11) vs (None, 1))

Jason Brownlee (February 19, 2021 at 5:53 am): It suggests your data was not loaded correctly, perhaps this will help: http://machinelearningmastery.com/load-machine-learning-data-python/

Rob (March 1, 2021 at 11:59 pm): Ah thanks, it turns out it was an issue with the wrong number of nodes on the sigmoid layer.

Jason Brownlee (March 2, 2021 at 5:45 am): Happy to hear you solved your problem!

Sofia (February 24, 2021 at 3:53 am): Another great tutorial!! When I run the program it crashes with the error seen below:
2021-02-23 18:50:50.497125: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2021-02-23 18:50:50.498601: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "C:/Users/USER/PycharmProjects/Sofia/main.py", line 26, in
X = dataset[:,0:8]
File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\frame.py", line 3024, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Users\USER\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\indexes\base.py", line 3080, in get_loc
return self._engine.get_loc(casted_key)
File "pandas\_libs\index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 75, in pandas._libs.index.IndexEngine.get_loc
TypeError: '(slice(None, None, None), slice(0, 8, None))' is an invalid key
How would I go about fixing this error? Thank you in advance!

Jason Brownlee (February 24, 2021 at 5:38 am): Thanks! Sorry to hear that, perhaps these tips will help: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me

Slava (February 27, 2021 at 3:46 am): It looks like model.predict_classes() was deprecated on 2021-01-01. Cheers, Slava

Jason Brownlee (February 27, 2021 at 6:09 am): Thanks.

Atsushi Isobe (March 3, 2021 at 11:26 pm): What is the new method to use? I cannot run the predict method after finishing the training.

Jason Brownlee (March 4, 2021 at 5:50 am): This will help: https://machinelearningmastery.com/how-to-make-classification-and-regression-predictions-for-deep-learning-models-in-keras/

Mitchell (March 11, 2021 at 8:16 am): Jason, I have a couple of questions regarding the layers and how they choose filters.
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
1) What is the filter size for each layer above? 3×3 or 7×7?
2) Are there any pre-defined 3×3 filters, 7×7 filters?
3) In hidden layers, filters are usually used to produce the next layer. How does the model choose filters? For example, if a layer has 16 nodes, how would I choose 32 filters so that the next layer will have 32 nodes (neurons)? When you create a model, do you need to specify filters for each layer, like the size of a filter and how many filters?
Thanks!

Jason Brownlee (March 11, 2021 at 1:25 pm): There are no filters in a Dense layer; filters are something to do with convolutional layers: https://machinelearningmastery.com/convolutional-layers-for-deep-learning-neural-networks/

marineboy (March 12, 2021 at 8:22 pm): Hello Jason, I have a problem! Can you help me: when I call predict_classes(Z) with Z=[100,100,100,100,100,100,100,100], as you can see this data is very different, but the output is still 0 or 1. I want the output to be a "don't know" label. How can I make it do that? Please help me. Thank you so much, sir.

Jason Brownlee (March 13, 2021 at 5:29 am): Sorry, I don't understand. Perhaps you can rephrase the problem you're having?

Franklin (March 17, 2021 at 3:00 pm): It's an awesome blog. Keep up the good work.

Jason Brownlee (March 18, 2021 at 5:15 am): Thanks!

Hamza (March 19, 2021 at 12:38 am): 79.53 accuracy.

Jason Brownlee (March 19, 2021 at 6:23 am): Well done!

Oriyomi Raheem (March 20, 2021 at 6:06 am): I am trying to train permeability data in a LAS file and predict it afterwards. Please help.

Jason Brownlee (March 21, 2021 at 6:00 am): Perhaps this process will help you work through your project: https://machinelearningmastery.com/start-here/#process

Bangash 李忠勇 (March 31, 2021 at 6:41 pm): accuracy: 0.7865, Accuracy: 78.65

Jason Brownlee (April 1, 2021 at 8:08 am): Well done!

Pankaj (April 23, 2021 at 7:19 am): With categorical features, how would I prevent a Keras model from making a prediction on test samples that it has not seen in the training set, and instead either use another model or throw an exception?

Jason Brownlee (April 24, 2021 at 5:13 am): Sorry, I don't understand. Perhaps you can elaborate?
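To make the Dense versus convolutional distinction above concrete, here is a minimal sketch; the filter count and kernel size are arbitrary illustration values, not recommendations:

from tensorflow.keras.layers import Dense, Conv2D

# A Dense layer only has a number of nodes; there is no filter or kernel size to choose.
hidden = Dense(12, activation='relu')

# Filters belong to convolutional layers: this layer learns 32 filters, each 3x3.
conv = Conv2D(filters=32, kernel_size=(3, 3), activation='relu')

The tutorial's model uses only Dense layers, so there are no filters to configure anywhere in it.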
Luca (April 26, 2021 at 8:31 pm): All the content you create and offer is absolutely amazing. Very informative, very up-to-date and crystal-clear. THANK YOU!

Jason Brownlee (April 27, 2021 at 5:16 am): You're welcome.

Ronald Ssebadduka (May 5, 2021 at 4:53 pm):
File "/Users/ronaldssebadduka/PycharmProjects/pythonProject1/venv/lib/python3.9/site-packages/numpy/lib/npyio.py", line 1067, in read_data
items = [conv(val) for (conv, val) in zip(converters, vals)]
File "/Users/ronaldssebadduka/PycharmProjects/pythonProject1/venv/lib/python3.9/site-packages/numpy/lib/npyio.py", line 1067, in
items = [conv(val) for (conv, val) in zip(converters, vals)]
File "/Users/ronaldssebadduka/PycharmProjects/pythonProject1/venv/lib/python3.9/site-packages/numpy/lib/npyio.py", line 763, in floatconv
return float(x)
ValueError: could not convert string to float: '\ufeff"6'
I get this error when I run your code! How can I fix it?

Jason Brownlee (May 6, 2021 at 5:42 am): Sorry to hear that, perhaps some of these tips will help: https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me

Shilpa (May 28, 2021 at 4:43 am): The contents are explained in a simple way and are so clear. Thanks Jason.

Jason Brownlee (May 28, 2021 at 6:49 am): You're welcome.

Toni Nehme (May 28, 2021 at 7:56 pm): Please, please help me build a multilayer perceptron to use for a regression problem. Thank you.

Jason Brownlee (May 29, 2021 at 6:50 am): Sure, see this: https://machinelearningmastery.com/regression-tutorial-keras-deep-learning-library-python/

James Mayr (May 29, 2021 at 11:02 pm): Thank you sooo much for your tutorial! I struggled with the input layer and the Keras help was not helpful. But your explanation gave me the insight and things became totally clear! That was great, thank you!

Jason Brownlee (May 30, 2021 at 5:50 am): You're welcome!

Meenakshi (June 3, 2021 at 8:28 pm): Great work, Sir. Simple, detailed explanation of complex things. I would like to learn modelling for DDoS attack detection with neural networks. Please suggest the way. Thanks in advance.

Jason Brownlee (June 4, 2021 at 6:48 am): Perhaps the tutorials here will help if you are modeling your problem as a time series: https://machinelearningmastery.com/start-here/#deep_learning_time_series

Meenakshi (June 5, 2021 at 11:34 pm): Thank you very much. I will go through it, Sir.

Jason Brownlee (June 6, 2021 at 5:51 am): You're welcome.

JC (June 24, 2021 at 4:13 am): The following are the outcomes of the first 10 consecutive executions on my 8GB RAM, 64-bit Windows 10 platform:
Accuracy: 65.49
Accuracy: 70.70
Accuracy: 75.91
Accuracy: 76.04
Accuracy: 78.26
Accuracy: 76.04
Accuracy: 77.86
Accuracy: 79.17
Accuracy: 78.52
Accuracy: 78.91
The computer does not have a GPU. The script gives some warning messages. One of them is: "None of the MLIR Optimization Passes are enabled (registered 2)".

Jason Brownlee (June 24, 2021 at 6:06 am): Well done!

Sneha (July 2, 2021 at 8:31 am): Hi, I have a question regarding the number of inputs. I am attempting to fit a neural network for a classification model. However, the features in my model are categorical, so I need to one-hot encode them. For instance, if a categorical variable has 3 values and I one-hot encode it, would that make 'input_dim' 1 or 3?

Jason Brownlee (July 3, 2021 at 6:05 am): Yes, categorical variables will need to be encoded. 3 categories will become 3 binary input variables when using a one hot encoding.

Rohan (July 3, 2021 at 10:15 am): My results:
Accuracy: 75.78
Accuracy: 78.26
Accuracy: 76.30
Accuracy: 77.47
Accuracy: 77.47

Jason Brownlee (July 4, 2021 at 5:58 am): Well done!

Patrick (July 10, 2021 at 8:32 pm): Hi Jason, thank you for all of your content. All very insightful for someone new to Keras and machine learning. If you could offer any guidance/insight into the problem below that I'm trying to tackle, it would be much appreciated. I am trying to replicate a ball prediction model similar to the one discussed here: https://towardsdatascience.com/predicting-t20-cricket-matches-with-a-ball-simulation-model-1e9cae5dea22
This is a multi-classification problem (thank you for your article on this). There are 8 outcomes that I am trying to predict (0, 1, 2, 3, 4, 6, Wide, Wicket), column H in my dataset (https://i.stack.imgur.com/DmTNb.png). This dataset is ball-by-ball (match) data from many cricket matches. Columns A-G are the input variables that should be used to predict the probability of each outcome (innings, over, batsman, bowler etc.).
Model:
X = my_data[:,0:7]
y = my_data[:,7]
model = Sequential()
model.add(Dense(12, input_dim=7, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=150, batch_size=10, verbose=0)
_, accuracy = model.evaluate(X, y, verbose=0)
print('Accuracy: %.2f' % (accuracy*100))
Running the above model on the ball-by-ball dataset gives an accuracy of 30%. As the article suggests, I want to include more data, i.e. the historical probability of each individual batsman and bowler achieving each of the 8 outcomes. This means I have 3 datasets which should be used to influence the probability of each outcome.
How and when should I be trying to introduce these 3 linked datasets? I presumably want the model to consider all this information at the same time and not in isolation. Is it a case of trying to incorporate the batsman/bowler datasets into the match-by-match data? The only issue I have with this is that there are c.200,000 rows of match data, whereas a player database will have c.500 rows.
Maybe I am wrong, and I should be running the multiple datasets through the model individually and then somehow pooling the outcomes. Is this even possible? Although I doubt that this is even recommended/worthwhile.
If you have any suggestions on how to improve the above, or achieve the desired outcome, it would be most welcome. Thanks again for all your hard work in maintaining a great data science site.

Jason Brownlee (July 11, 2021 at 5:39 am): Defining the data/problem for a model is the real work in applied machine learning. There is no single good/best way; I recommend reading papers on or related to the topic to get ideas, prototype, experiment, etc. Also, this may help with defining the problem: https://machinelearningmastery.com/how-to-define-your-machine-learning-problem/
More generally, these tutorials explain how to get better performance from neural nets: https://machinelearningmastery.com/start-here/#better

Jolene Wang (July 23, 2021 at 5:08 am): Hi Jason! Thank you for providing all of this content. I am trying to replicate this model using my own csv file, however it contains many NaN values and thus cannot be loaded through the loadtxt() function. As 0 is a very important number in my dataset, I cannot change my NAs to 0. What can I do? Thank you again for all of your help.

Jason Brownlee (July 23, 2021 at 6:04 am): You must impute the missing values first, there are many methods: https://machinelearningmastery.com/?s=missing&post_type=post&submit=Search

Jolene Wang (July 23, 2021 at 5:13 am): I forgot to mention, but is there a way for me to keep the NaN in the dataset and have the model read it as just a missing value? It would be difficult for me to assign the NaNs a specific value, as it could mess up the dataset.

Jason Brownlee (July 23, 2021 at 6:04 am): No. NaN will cause all computation to fail in an ML model, including a neural net.

Isiyaku Saleh (July 31, 2021 at 10:09 am): Thank you very much Dr Jason, the tutorial has really served me well.

Jason Brownlee (August 1, 2021 at 4:49 am): You're welcome!

Tim Papa (August 3, 2021 at 8:02 pm): This tutorial builds a neural network, but what specifically is this neural network? Is it an ANN or CNN or RNN?
Reply JasonBrownlee August4,2021at5:13am # Itisamulti-layerperceptron(MLP)whichisatypeoffeed-forwardneuralnetwork.ItisnotaCNNorRNN. Reply EdwinBrown August13,2021at7:26am # Firstandforemost,thankyouJasonBrownleeforgettingmestartedwithmyfirstdeeplearningproject.Ifollowedstep-by-stepandfoundmyselfstuckforawhile;however,aftercountlesshoursofresearchingIfoundmycodebelowtoworkforPython3.8.10,Tensorflow2.5.0,IPython7.26.0,andKeras2.6.0respectedenvironments.IapologizeifIovercommented,IwastakingnotesasIwasreadingthroughJason’ssourcecodesandnotes.IusedAnaconda-SpyderandIwantedtoseetheresultsaswellinJupyterNotebook.Ihopethishelps: importsys importtensorflowastf fromtensorflowimportkeras fromnumpyimportloadtxt fromtensorflow.keras.modelsimportSequential fromtensorflow.keras.layersimportDense #LoadthedataandsplittheX(input)&y(output)variables #Besureyourdataisintherespectedfileastheproject dataset=loadtxt(r’pima-indians-diabetes.csv’,delimiter=’,’) X=dataset[:,0:8] y=dataset[:,8] #Createoursequentialmodel #input_dimsetsnumberofarguementsforthenumberofinputvariables #Thisstructureasthreelayers #Fullyconnectedlayersaredefinedbythedenseclass #formoreondenseclassviewonKerashomepage #ReLUonthefirsttolayersandSigmoidfunctionontheoutputlayer(thirdlayer) #Defaultthresholdof0.5andbetterperformancefromReLU #ReLUmeasuresoutputbetween0and1asseeninprobability #Themodelexpectsrowsofdatawith8variables(theinput_dim=8argument) #Thefirsthiddenlayerhas12nodesandusesthereluactivationfunction. #Thesecondhiddenlayerhas8nodesandusesthereluactivationfunction. #Theoutputlayerhasonenodeandusesthesigmoidactivationfunction. model=Sequential() model.add(Dense(12,input_dim=8,kernel_initializer=’normal’,activation=’relu’)) model.add(Dense(8,kernel_initializer=’normal’,activation=’relu’)) model.add(Dense(1,kernel_initializer=’normal’,activation=’sigmoid’)) #Compilethemodel model.compile(loss=’binary_crossentropy’,optimizer=’adam’,metrics=[‘accuracy’]) #Fitthemodelontothedataset #Epoch:Onepassthroughalloftherowsinthetrainingdataset. #Batch:Oneormoresamplesconsideredbythemodelwithinanepochbeforeweightsareupdated. #TheCPUorGPUhandlesitfromhere,usually,largerdatasetsneedtheGPU model.fit(X,y,epochs=150,batch_size=10,verbose=0) #Evaluatethedata _,accuracy=model.evaluate(X,y,verbose=0) print(‘Accuracy:%.2f’%(accuracy*100)) #makeprobabilitypredictionswiththemodel predictions=model.predict(X) #roundpredictions rounded=[round(x[0])forxinpredictions] Reply AdrianTam August14,2021at2:33am # Goodwork! Reply Bonjour20 August15,2021at9:43pm # IuseWindowssystemonmylaptop,andIdonotknowifIshouldhaveaLinuxdestro>IamconfusedaboutwhereshouldIdownloadtheDataset>Hementioned:”onthesameplacewhereptyhonisinstalled”,whatisthisriddle? Itisariddleforabeginnerlikemecomingfromnontechnologicalbackground. Reply AdrianTam August17,2021at7:30am # Usuallythatmeans,youjustneedtoplacethedatafilesandthepythoncodefiletogetheratthesamefolder. Reply samasamaan August30,2021at6:19am # Hello Thanksforthisgreattutorial🙂 Questionno.1:canweapplydeeplearninginApacheSpark? Questionno.2:Ihavethefollowingdatasethttps://www.kaggle.com/leandroecomp/sdn-traffic Itriedthemulti-classclassificationcodebutitstopworking.Whatcouldbethereasonbehindthatfault? Thanks Reply AdrianTam September1,2021at7:39am # (1)yes(2)whatspecificallystoppedworking? Reply MALAVIKA September23,2021at11:17pm # Firstofall,Iamoverwhelmedbythenumberofcommentsandpromptrepliesbytheauthor.Youarereallyalifesavertomany,Jason. 
MALAVIKA, September 23, 2021 at 11:17 pm:

First of all, I am overwhelmed by the number of comments and the prompt replies by the author. You are really a lifesaver to many, Jason. Now, I have a doubt. I have been searching for a simple feed-forward back-propagation ANN code in Python, and I could only find feed-forward neural networks everywhere. In your example, is backpropagation happening? Doesn't an ANN mean both processes by default? Shouldn't we normally apply backpropagation in an ANN?

Adrian Tam, September 24, 2021 at 4:41 am:

Feed-forward happens when you give input to the ANN. Backpropagation happens when you calculate the gradient and update the weights in each neuron.

MALAVIKA, September 24, 2021 at 5:06 pm:

So, I suppose it (back-propagation) is not happening in the above tutorial. Can you show us how to code backpropagation in Python, or direct me to any posts that show the same? Thank you.

Adrian Tam, September 25, 2021 at 4:36 am:

When you call the fit() function, backpropagation is used to update the model parameters. That is part of the training process; we do not normally do it explicitly. If you are interested, see a toy example here: https://machinelearningmastery.com/implement-backpropagation-algorithm-scratch-python/

Elham, October 8, 2021 at 1:11 am:

Hi, thanks a lot for this awesome tutorial. I'm using TensorFlow version 2.6, and when making class predictions with the model with these lines of code,

predict_x = model.predict(X)
classes_x = np.argmax(predict_x, axis=1)
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), classes_x[i], y[i]))

the output is:

[6.0, 148.0, 72.0, 35.0, 0.0, 33.6, 0.627, 50.0] => 0 (expected 1)
[1.0, 85.0, 66.0, 29.0, 0.0, 26.6, 0.351, 31.0] => 0 (expected 0)
[8.0, 183.0, 64.0, 0.0, 0.0, 23.3, 0.672, 32.0] => 0 (expected 1)
[1.0, 89.0, 66.0, 23.0, 94.0, 28.1, 0.167, 21.0] => 0 (expected 0)
[0.0, 137.0, 40.0, 35.0, 168.0, 43.1, 2.288, 33.0] => 0 (expected 1)

Why are all classes_x zero?

Adrian Tam, October 13, 2021 at 5:09 am:

Because the prediction here is a binary one, predict_x is an N x 1 matrix, so argmax will only ever report 0. Your syntax is correct for multi-class problems, where the neural network has an output layer of Dense(n) with n > 1. I've updated the sample code here to reflect what you should do. Thanks for alerting me.

christoper, October 17, 2021 at 6:41 am:

Hello, this is helpful. I am studying neural networks and I am just a beginner. You said this is the MLP type of neural network, right? I just want to ask about another example: what kind of neural network architecture is used here? Is it an RNN, an ANN, or an LSTM? Link below: https://towardsdatascience.com/how-to-create-a-chatbot-with-python-deep-learning-in-less-than-an-hour-56a063bdfc44

Adrian Tam, October 20, 2021 at 8:52 am:

MLP = multilayer perceptron, which usually means a neural network with 3 or more layers. The link you provided uses Dense(), which is a fully connected layer, hence it is also an MLP.
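To make the point about argmax concrete: with a single sigmoid output there is only one column to take the argmax over, so the index is always 0. Below is a minimal sketch of thresholding the binary prediction instead, assuming model, X, and y are the ones defined in the tutorial; 0.5 is the usual cut-off, but it can be tuned.

# probabilities in [0, 1] from the single sigmoid output node
probabilities = model.predict(X)
# convert probabilities to class labels by thresholding at 0.5
classes = (probabilities > 0.5).astype('int32')
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), classes[i][0], y[i]))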
Flo, October 25, 2021 at 11:29 pm:

Hi Jason and Adrian, I came across your very nice tutorial because I have a quite similar problem. I have a couple of numerical process parameters from an engineering problem (similar to your input parameters here), which I want to relate to an outcome value (which, unlike in your tutorial, is again a numerical value, not a class). Can you tell me (or do you know of a similarly handy tutorial) how I need to modify the code? Thanks a lot!

Adrian Tam, October 27, 2021 at 2:23 am:

It sounds to me like it is a regression problem instead of a classification problem. In this case, there are two things you may consider changing:
1. In the last Dense() layer, you may want a different activation (e.g., linear), because sigmoid is bounded between 0 and 1.
2. model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) should have the loss and metric changed. For example, you may consider using MSE, because cross-entropy and accuracy are measures specific to classification.

Dr Shazia Saqib, October 28, 2021 at 3:14 am:

Awesome, great service, very helpful. I am sharing it with my students. Lord bless you, ameen.

veejay, November 5, 2021 at 11:23 pm:

Awesome tutorial, very well detailed. I have a question though: how do I improve validation loss and validation accuracy? I am very new to neural networks; I have only scratched the surface of weights, biases, activation functions, loss functions, architectures, how to build layers in Keras, and other fundamental terminology (thanks to you and the deeplizard tutorials on YouTube). I am studying and practicing, and I wanted to try to replicate a project, so I came across this tutorial from DataFlair where a chatbot is created, and I tried to imitate it. Link: https://data-flair.training/blogs/python-chatbot-project/

From what I have observed and based on my learning, the model created there is an ANN/MLP. My problem is that when I trained the model and set validation_split=0.3, the training loss and accuracy are good but the validation loss and accuracy do the opposite. I know that it may be an overfitting problem, so here is what I did:
- Added regularization with L2
- Slowed down the learning rate, and also tried to speed it up
- Dropout (0.2-0.5)
- Changed the batch size
- Removed layers
- Added layers
- Experimented with different activation and loss functions (sigmoid, softplus, binary_crossentropy)
- Even tried to add data to my dataset (from 320 to 796 inputs)

I tried all of this, but val_loss is still high and val_accuracy still low. (The best I got is loss: 0.1 / accuracy: 98 percent with val_loss: 1.9 / val_accuracy: 52 percent, while the worst is val_loss over 3.0 and val_accuracy of 35-40 percent.) The dataset I'm using is from DataFlair, but I expanded it. Here is my visualized model: https://i.stack.imgur.com/HE1jU.png

Adrian Tam, November 7, 2021 at 10:35 am:

I can't really tell what went wrong here. Did you track the validation loss as you trained? At first, the training loss and validation loss should be equally bad. How did they progress in each training epoch? This may give you clues.

Veejay, November 9, 2021 at 6:54 am:

Yes, I both trained and validated. They are equally bad at first, and as training progressed, the loss improved by miles but val_loss and val_accuracy improved an inch.

Adrian Tam, November 14, 2021 at 1:36 pm:

That's expected. Your model was looking at the training loss and trying to improve itself, but it was not able to see the validation data, so it is harder and slower to improve there.

Mak, November 17, 2021 at 6:24 pm:

Your books helped me understand LSTMs greatly. I am having trouble with developing an attention layer; please can you do a tutorial on using Attention/MultiHeadAttention? Thank you.

Adrian Tam, November 18, 2021 at 5:38 am:

Please see this series: https://machinelearningmastery.com/category/attention/

Nikhil Gupta, November 25, 2021 at 5:47 pm:

The accuracy from the ANN for this dataset is between 70-78%. Using logistic regression, we are getting 78% accuracy for the same dataset. So, what is the advantage of using an ANN?

Adrian Tam, November 26, 2021 at 2:09 am:

The ANN is more flexible. Occam's razor: you use the simplest model for the job. If logistic regression fits well, you have no reason to use an ANN; it uses more memory and runs slower.
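A minimal sketch of the two regression changes suggested above, assuming the same 8-column input as the tutorial; the hidden layer sizes are kept from the tutorial and are illustrative rather than tuned.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
# a linear output leaves the prediction unbounded, suitable for a numeric target
model.add(Dense(1, activation='linear'))
# mean squared error loss and mean absolute error metric replace the classification settings
model.compile(loss='mse', optimizer='adam', metrics=['mae'])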
Flo, December 3, 2021 at 8:38 pm:

Thanks for the tutorial. I tried your approach, and it worked nicely on my data. For a first attempt, I only used data measured after the process (e.g., process time, temperature difference during the process, etc.). For a further, deeper investigation, I would like to use measured data curves, for example the development of the process temperature over time during the process itself. By using these curves, I expect a higher degree of information. Could you provide a hint on how to work with this data? For the first attempt, I simply generated a table with my process parameters in the first 6 columns and my output value in column 7, which could easily be fed into the model. Thanks a lot!

Adrian Tam, December 8, 2021 at 6:59 am:

Everything sounds straightforward to me. Did you try implementing this? Any errors?

Flo, December 10, 2021 at 6:32 pm:

To be honest, I have no clue how to provide the data. In the first case, I had a table with 7 columns: 6 input process parameters and one column with output values. Now I would like to replace (or add) some input columns with time-recorded data curves, which are in effect small tables themselves (first column the timestamp, second column the time-specific process parameter). How do I work with this?

Adrian Tam, December 15, 2021 at 5:38 am:

Usually I would use pandas to process the data and convert it to a NumPy array before feeding it to the Keras model. Pandas makes it easier to manipulate tables.

Rick, December 28, 2021 at 7:46 am:

You may need to adjust the imports for compatibility with newer TensorFlow versions.

Instead of:

from keras.models import Sequential
from keras.layers import Dense

Use:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

This solved my issues with Conda. Thanks for the excellent tutorials and articles!

James Carmichael, December 29, 2021 at 11:44 am:

Thank you for the feedback, Rick! I also often try to run code in both Anaconda and Google Colab to identify and correct compatibility issues.

Preeti, February 10, 2022 at 4:18 pm:

My Accuracy: 76.95. Thank you for the code and the detailed explanation.

James Carmichael, February 11, 2022 at 8:35 am:

You are very welcome, Preeti! Keep up the great work!
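One simple way to use time-recorded curves as inputs, sketched under the assumption that every curve can be resampled onto the same fixed-length grid: interpolate each curve to a fixed number of points and append those points to the scalar process parameters as extra input columns. The names and numbers below are hypothetical, purely for illustration.

import numpy as np

def curve_to_features(timestamps, values, n_points=20):
    # resample one recorded curve onto a fixed-length grid so that every
    # process contributes the same number of input columns
    grid = np.linspace(timestamps.min(), timestamps.max(), n_points)
    return np.interp(grid, timestamps, values)

# hypothetical single process: 6 scalar parameters plus one temperature curve
scalars = np.array([1.2, 3.4, 0.7, 5.5, 2.1, 0.9])
t = np.array([0.0, 1.0, 2.5, 4.0, 6.0])          # timestamps
temp = np.array([20.0, 35.0, 80.0, 75.0, 40.0])  # temperature readings
row = np.concatenate([scalars, curve_to_features(t, temp)])  # 6 + 20 = 26 input features

Stacking one such row per process gives an ordinary 2D input array; a recurrent or 1D convolutional model over the raw curves is a more involved alternative.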
Alan, March 9, 2022 at 8:46 pm:

Hi James, great work. Never mind neural networks, this is causing me a lot of deep thinking. I am running your tutorial on a Pi 400 with a 64-bit OS in Thonny, and it works reasonably well on this machine. However, I came across an error in one of your examples, the Keras neural network using 'pima-indians-diabetes.csv':

from tensorflow.python.eager.context import get_config
ImportError: cannot import name 'get_config' from 'tensorflow.python.eager.context' (/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/context.py)

I discovered that the fault lay with the keras.models and keras.layers imports and have rejigged the sketch as follows (changed lines marked with asterisks):

# first neural network with keras tutorial
from numpy import loadtxt
from tensorflow.keras import models, layers  # ********************
# from keras.models import Sequential  # ********************
# from keras.layers import Dense  # ********************
# load the dataset
dataset = loadtxt('/home/pi/Documents/pima-indians-diabetes.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = models.Sequential()  # ********************
model.add(layers.Dense(12, input_dim=8, activation='relu'))  # ********************
model.add(layers.Dense(8, activation='relu'))  # ********************
model.add(layers.Dense(1, activation='sigmoid'))  # ********************
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10)
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))

That now produces Accuracy: 74.35.

James Carmichael, March 10, 2022 at 10:37 am:

Hi Alan, thank you for the feedback and support! An interesting application to the Raspberry Pi! Keep in mind that our implementations may not be fully compatible with the libraries developed for that platform. Keep up the great work!

Nishanth, March 14, 2022 at 3:38 am:

Hi, amazing tutorial! Simple and easy. I tried the same thing on my dataset, but the last for loop does not seem to work. Could you please help me with it? Here is the for loop:

for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))

Thanks.

James Carmichael, March 14, 2022 at 11:48 am:

Hi Nishanth, are you copying and pasting the code or typing it in? Be careful when copying and pasting code, as it may affect the code layout and introduce errors that are very difficult to spot visually.

Nishanth, March 14, 2022 at 3:40 am:

Hi, here in the comment the print statement looks un-indented, but in my code I do indent it and it still does not work.

James Carmichael, March 14, 2022 at 11:52 am:

Hi Nishanth, please see the previous replies.

Nishanth, March 14, 2022 at 3:45 am:

Hi, amazing tutorial! Simple and easy to follow. I tried it on my dataset, but the last for loop that prints the first 5 examples does not work. It gives me KeyError: 0. Could you help me with it? Thanks.

James Carmichael, March 14, 2022 at 11:47 am:

Hi Nishanth, please share the full error message so we can better assist you.

Nishanth, March 14, 2022 at 11:41 pm:

Found a way out. The thing is that here the dataset is a NumPy array, whereas mine was a pandas DataFrame. Thanks for the help.

NV Raman, April 2, 2022 at 1:51 am:

Hello Jason, really wonderful tutorial. When I ran the code, everything worked, except that while printing the predictions I get a key error.

James Carmichael, April 2, 2022 at 12:18 pm:

Hi NV, can you provide the exact error message so that we can better assist you?
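On the KeyError: 0 mentioned above: indexing a pandas DataFrame with X[i] looks up a column label rather than a row, which is why the print loop fails when the data is not a NumPy array. A minimal sketch of the two usual fixes, assuming df is a DataFrame holding the same columns as the tutorial's CSV (the name df is only for illustration):

# option 1: convert to NumPy arrays once, then keep the tutorial code unchanged
X = df.iloc[:, 0:8].to_numpy()
y = df.iloc[:, 8].to_numpy()

# option 2: keep the DataFrame and index rows by position with .iloc, e.g.
# print('%s => %d (expected %d)' % (X.iloc[i].tolist(), predictions[i], y.iloc[i]))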
Susia, April 9, 2022 at 1:30 am:

Hi, I learned the same tutorial for developing a first neural net in Keras in one of your mini-courses. To develop my own model on my own dataset, I have tried to adapt this tutorial. The problem is that my target y is count data (the number of traffic flows, for example). In my case, how do I choose the activation function for the output layer? Is it relu? And how do I choose the loss function? I have tried MeanSquaredError, where the loss value is quite large, and categorical_crossentropy, where the loss value is nan. I am considering ordering the complete book Deep Learning With Python. What is the difference between the tutorials inside the book and the mini-course?

James Carmichael, April 9, 2022 at 8:34 am:

Hi Susia, the following resource may add clarity on how to choose an activation function: https://machinelearningmastery.com/choose-an-activation-function-for-deep-learning/

Nasrin, April 23, 2022 at 4:32 am:

Hi sir, thanks a million for your awesome post. Could you please explain how we can divide X and y into train and test samples in deep learning? Is this code correct?

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

James Carmichael, April 24, 2022 at 3:25 am:

Hi Nasrin, the sample code you provided looks accurate. Feel free to implement it and let us know if you encounter any issues.

Shiva Manhar, April 23, 2022 at 3:25 pm:

24/24 [==============================] - 0s 489us/step - loss: 0.4517 - accuracy: 0.7956
Accuracy: 79.56
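For the earlier question about a count-valued target such as traffic flow, one common option (a sketch of one possible framing, not the only choice) is a single output with a non-negative activation and the Poisson loss that Keras provides; the input dimension and layer sizes below are illustrative, and scaling the inputs usually helps keep the loss stable.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(16, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
# the exponential activation keeps the predicted rate positive, matching a count target
model.add(Dense(1, activation='exponential'))
# the Poisson loss is designed for count data; mean absolute error is reported for readability
model.compile(loss='poisson', optimizer='adam', metrics=['mae'])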
Jack Sparrow, June 3, 2022 at 5:38 am:

Deep learning with Keras on the MNIST dataset:

import numpy as np
from keras.models import Sequential
from keras import layers
from keras.utils import np_utils   # needed for the one-hot label conversion
from keras.datasets import mnist   # image data

(X_train, y_train), (X_test, y_test) = mnist.load_data()
print("Before reshape", X_train.shape)
X_train = X_train.reshape(-1, 28, 28, 1)
X_test = X_test.reshape(-1, 28, 28, 1)
print("After reshape", X_train.shape)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
Y_train = np_utils.to_categorical(y_train)
Y_test = np_utils.to_categorical(y_test)

model = Sequential()
model.add(layers.Convolution2D(32, 3, 3, activation='relu', input_shape=(28, 28, 1)))
model.add(layers.Convolution2D(32, 3, 3, activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(0.25))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=32, epochs=10, verbose=1)

test_loss, test_acc = model.evaluate(X_test, Y_test, verbose=0)
print("Test Loss", test_loss)
print("Test Accuracy", test_acc)

Deep learning with the data_diagnosis dataset:

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
from keras import layers
from keras import Sequential
import tensorflow

dataSet = pd.read_csv(".\data_diagnosis.csv")
dataSet.drop(["id", "Unnamed: 32"], axis=1, inplace=True)
dataSet.diagnosis = [1 if each == "M" else 0 for each in dataSet.diagnosis]
y = dataSet.diagnosis.values
x_data = dataSet.drop(["diagnosis"], axis=1)

scaler = StandardScaler()
x = scaler.fit_transform(x_data)
Y = to_categorical(y)

trainX, testX, trainy, testy = train_test_split(x, Y, test_size=0.2, random_state=42)
trainX = trainX.reshape(trainX.shape[0], trainX.shape[1], 1)
testX = testX.reshape(testX.shape[0], testX.shape[1], 1)

verbose, epochs, batch_size = 0, 10, 8
n_features, n_outputs = trainX.shape[1], trainy.shape[1]

model = Sequential()
input_shape = (trainX.shape[1], 1)
model.add(layers.Conv1D(filters=8, kernel_size=5, activation='relu', input_shape=input_shape))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling1D(pool_size=3))
model.add(layers.Conv1D(filters=16, kernel_size=5, activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling1D(pool_size=2))
model.add(layers.Flatten())
model.add(layers.Dense(200, activation='relu'))
model.add(layers.Dense(n_outputs, activation='softmax'))
model.summary()
print('started')

# model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.compile(loss='binary_crossentropy',
              optimizer=tensorflow.keras.optimizers.Adam(),
              metrics=['accuracy'])  # compile the model
dataSet.info()
model.fit(trainX, trainy, epochs=epochs, verbose=1)
_, accuracy = model.evaluate(testX, testy, verbose=0)
print(accuracy)

James Carmichael, June 3, 2022 at 9:12 am:

Thank you for the feedback, Jack! Keep up the great work!

Jack, June 17, 2022 at 5:25 am:

24/24 [==============================] - 0s 1ms/step
[6.0, 148.0, 72.0, 35.0, 0.0, 33.6, 0.627, 50.0] => 1 (expected 1)
[1.0, 85.0, 66.0, 29.0, 0.0, 26.6, 0.351, 31.0] => 0 (expected 0)
[8.0, 183.0, 64.0, 0.0, 0.0, 23.3, 0.672, 32.0] => 1 (expected 1)
[1.0, 89.0, 66.0, 23.0, 94.0, 28.1, 0.167, 21.0] => 0 (expected 0)
[0.0, 137.0, 40.0, 35.0, 168.0, 43.1, 2.288, 33.0] => 1 (expected 1)

My accuracy is 77.99, but this shows it as 100. Is this right?

James Carmichael, June 17, 2022 at 9:28 am:

Thank you for the feedback, Jack!

Nicola Menga, June 22, 2022 at 5:53 pm:

Hi, thank you for this tutorial. It is very useful. I have a question. This is a tutorial for a binary classification purpose. However, I want to build a feed-forward neural network which predicts more than one variable (more than one neuron in the output layer), each with a value between 0 and 1 (for example 0.956, 0.878, 0.897 and so on), unlike the case of this tutorial, in which the variable to be predicted takes only the values 0 or 1. I tried to apply the network developed in this tutorial for this purpose, but the results are bad. My test dataset has 257 observations. If I apply this network, the prediction array consists of 257 values (one for each observation), but these values are all the same (for example 1: 0.985; 2: 0.985; 3: 0.985; ...; 256: 0.985; 257: 0.985). I hope I have explained it well. Is there a Keras model/function adequate for my problem (i.e. the prediction of a variable which is not 0 or 1)? Thank you for your help. Nicola Menga.

James Carmichael, June 23, 2022 at 10:59 am:

Hi Nicola, please clarify and/or elaborate on your question so that we may better assist you.

Sadegh, July 7, 2022 at 3:49 am:

Hi there, I always get a warning when I'm using a NN model made with Keras in Anaconda's Spyder console. The warning is as follows:

WARNING: AutoGraph could not transform ... and will run it as-is.
Cause: Unable to locate the source code of .... Note that functions defined in certain environments, like the interactive Python shell, do not expose their source code. If that is the case, you should define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.experimental.do_not_convert. Original error: lineno is out of bounds
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert

I would really appreciate it if you could help me out with this.
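On the question above about predicting several values that each lie between 0 and 1: a sigmoid output layer with one node per target plus a regression loss is one way to frame it, and constant predictions often point to unscaled inputs, so standardizing X is worth trying first. A minimal sketch assuming three targets; the sizes and epoch count are illustrative only.

from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# assumes X is an (n, m) input array and y an (n, 3) array of targets in [0, 1]
X = StandardScaler().fit_transform(X)  # put the inputs on a comparable scale

model = Sequential()
model.add(Dense(32, input_dim=X.shape[1], activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(3, activation='sigmoid'))  # three outputs, each bounded in [0, 1]
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
model.fit(X, y, epochs=200, batch_size=16, verbose=0)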


