Deep learning
Contents

1 Definition
2 Overview
3 Interpretations
4 History
4.1 Deep learning revolution
5 Neural networks
5.1 Artificial neural networks
5.2 Deep neural networks
5.2.1 Challenges
6 Applications
6.1 Automatic speech recognition
6.2 Image recognition
6.3 Visual art processing
6.4 Natural language processing
6.5 Drug discovery and toxicology
6.6 Customer relationship management
6.7 Recommendation systems
6.8 Bioinformatics
6.9 Medical image analysis
6.10 Mobile advertising
6.11 Image restoration
6.12 Financial fraud detection
6.13 Military
7 Relation to human cognitive and brain development
8 Commercial activity
9 Criticism and comment
9.1 Theory
9.2 Errors
9.3 Cyber threat
9.4 Reliance on human microwork
10 References

Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.

Deep learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.

Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains. Specifically, neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analog.
Theadjective"deep"indeeplearningcomesfromtheuseofmultiplelayersinthenetwork.Earlyworkshowedthatalinearperceptroncannotbeauniversalclassifier,andthenthatanetworkwithanonpolynomialactivationfunctionwithonehiddenlayerofunboundedwidthcanontheotherhandsobe.Deeplearningisamodernvariationwhichisconcernedwithanunboundednumberoflayersofboundedsize,whichpermitspracticalapplicationandoptimizedimplementation,whileretainingtheoreticaluniversalityundermildconditions.Indeeplearningthelayersarealsopermittedtobeheterogeneousandtodeviatewidelyfrombiologicallyinformedconnectionistmodels,forthesakeofefficiency,trainabilityandunderstandability,whencethe"structured"part. Definition RepresentingImagesonMultipleLayersofAbstractioninDeepLearning Deeplearningisaclassofmachinelearningalgorithmsthatusesmultiplelayerstoprogressivelyextracthigherlevelfeaturesfromtherawinput.Forexample,inimageprocessing,lowerlayersmayidentifyedges,whilehigherlayersmayidentifytheconceptsrelevanttoahumansuchasdigitsorlettersorfaces. Overview Mostmoderndeeplearningmodelsarebasedonartificialneuralnetworks,specifically,ConvolutionalNeuralNetworks(CNN)s,althoughtheycanalsoincludepropositionalformulasorlatentvariablesorganizedlayer-wiseindeepgenerativemodelssuchasthenodesindeepbeliefnetworksanddeepBoltzmannmachines. Indeeplearning,eachlevellearnstotransformitsinputdataintoaslightlymoreabstractandcompositerepresentation.Inanimagerecognitionapplication,therawinputmaybeamatrixofpixels;thefirstrepresentationallayermayabstractthepixelsandencodeedges;thesecondlayermaycomposeandencodearrangementsofedges;thethirdlayermayencodeanoseandeyes;andthefourthlayermayrecognizethattheimagecontainsaface.Importantly,adeeplearningprocesscanlearnwhichfeaturestooptimallyplaceinwhichlevelonitsown.(Ofcourse,thisdoesnotcompletelyeliminatetheneedforhand-tuning;forexample,varyingnumbersoflayersandlayersizescanprovidedifferentdegreesofabstraction.) Theword"deep"in"deeplearning"referstothenumberoflayersthroughwhichthedataistransformed.Moreprecisely,deeplearningsystemshaveasubstantialcreditassignmentpath(CAP)depth.TheCAPisthechainoftransformationsfrominputtooutput.CAPsdescribepotentiallycausalconnectionsbetweeninputandoutput.Forafeedforwardneuralnetwork,thedepthoftheCAPsisthatofthenetworkandisthenumberofhiddenlayersplusone(astheoutputlayerisalsoparameterized).Forrecurrentneuralnetworks,inwhichasignalmaypropagatethroughalayermorethanonce,theCAPdepthispotentiallyunlimited.Nouniversallyagreeduponthresholdofdepthdividesshallowlearningfromdeeplearning,butmostresearchersagreethatdeeplearninginvolvesCAPdepthhigherthan2.CAPofdepth2hasbeenshowntobeauniversalapproximatorinthesensethatitcanemulateanyfunction.Beyondthat,morelayersdonotaddtothefunctionapproximatorabilityofthenetwork.Deepmodels(CAP>2)areabletoextractbetterfeaturesthanshallowmodelsandhence,extralayershelpinlearningthefeatureseffectively. Deeplearningarchitecturescanbeconstructedwithagreedylayer-by-layermethod.Deeplearninghelpstodisentangletheseabstractionsandpickoutwhichfeaturesimproveperformance. Forsupervisedlearningtasks,deeplearningmethodseliminatefeatureengineering,bytranslatingthedataintocompactintermediaterepresentationsakintoprincipalcomponents,andderivelayeredstructuresthatremoveredundancyinrepresentation. Deeplearningalgorithmscanbeappliedtounsupervisedlearningtasks.Thisisanimportantbenefitbecauseunlabeleddataaremoreabundantthanthelabeleddata.Examplesofdeepstructuresthatcanbetrainedinanunsupervisedmannerareneuralhistorycompressorsanddeepbeliefnetworks. 
Interpretations

Deep neural networks are generally interpreted in terms of the universal approximation theorem or probabilistic inference.

The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions. In 1989, the first proof was published by George Cybenko for sigmoid activation functions and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik. Recent work also showed that universal approximation also holds for non-bounded activation functions such as the rectified linear unit.

The universal approximation theorem for deep neural networks concerns the capacity of networks with bounded width where the depth is allowed to grow. Lu et al. proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator.

The probabilistic interpretation derives from the field of machine learning. It features inference, as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function. The probabilistic interpretation led to the introduction of dropout as a regularizer in neural networks. The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop.

History

The first general, working learning algorithm for supervised, deep, feedforward, multilayer perceptrons was published by Alexey Ivakhnenko and Lapa in 1967. A 1971 paper described a deep network with eight layers trained by the group method of data handling. Other deep learning working architectures, specifically those built for computer vision, began with the Neocognitron introduced by Kunihiko Fukushima in 1980.

The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986, and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons.

In 1989, Yann LeCun et al. applied the standard backpropagation algorithm, which had been around as the reverse mode of automatic differentiation since 1970, to a deep neural network with the purpose of recognizing handwritten ZIP codes on mail. While the algorithm worked, training required 3 days.

By 1991 such systems were used for recognizing isolated 2-D hand-written digits, while recognizing 3-D objects was done by matching 2-D images with a handcrafted 3-D object model. Weng et al. suggested that a human brain does not use a monolithic 3-D object model, and in 1992 they published Cresceptron, a method for performing 3-D object recognition in cluttered scenes. Because it directly used natural images, Cresceptron started the beginning of general-purpose visual learning for natural 3D worlds. Cresceptron is a cascade of layers similar to Neocognitron. But while Neocognitron required a human programmer to hand-merge features, Cresceptron learned an open number of features in each layer without supervision, where each feature is represented by a convolution kernel. Cresceptron segmented each learned object from a cluttered scene through back-analysis through the network. Max pooling, now often adopted by deep neural networks (e.g. ImageNet tests), was first used in Cresceptron to reduce the position resolution by a factor of (2x2) to 1 through the cascade for better generalization.
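For readers unfamiliar with the operation, the following is a minimal NumPy sketch of 2x2 max pooling (an illustration of the general technique, not Cresceptron's code): each 2x2 block of a feature map is replaced by its strongest response, halving the spatial resolution.

    import numpy as np

    def max_pool_2x2(feature_map):
        # Downsample a 2-D feature map by taking the maximum of each 2x2 block.
        h, w = feature_map.shape
        blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        return blocks.max(axis=(1, 3))

    fm = np.arange(16.0).reshape(4, 4)
    print(max_pool_2x2(fm))  # 2x2 output: position resolution reduced by (2x2) to 1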
In 1994, André de Carvalho, together with Mike Fairhurst and David Bisset, published experimental results of a multi-layer boolean neural network, also known as a weightless neural network, composed of a 3-layer self-organising feature extraction neural network module (SOFT) followed by a multi-layer classification neural network module (GSN), which were independently trained. Each layer in the feature extraction module extracted features with growing complexity regarding the previous layer.

In 1995, Brendan Frey demonstrated that it was possible to train (over two days) a network containing six fully connected layers and several hundred hidden units using the wake-sleep algorithm, co-developed with Peter Dayan and Hinton. Many factors contribute to the slow speed, including the vanishing gradient problem analyzed in 1991 by Sepp Hochreiter.

Simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) were a popular choice in the 1990s and 2000s, because of artificial neural networks' (ANNs) computational cost and a lack of understanding of how the brain wires its biological networks.

Both shallow and deep learning (e.g., recurrent nets) of ANNs have been explored for many years. These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. Key difficulties have been analyzed, including gradient diminishing and weak temporal correlation structure in neural predictive models. Additional difficulties were the lack of training data and limited computing power.

Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI studied deep neural networks in speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 National Institute of Standards and Technology Speaker Recognition evaluation. The SRI deep neural network was then deployed in the Nuance Verifier, representing the first major industrial application of deep learning.

The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of the deep autoencoder on the "raw" spectrogram or linear filter-bank features in the late 1990s, showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results.

Many aspects of speech recognition were taken over by a deep learning method called long short-term memory (LSTM), a recurrent neural network published by Hochreiter and Schmidhuber in 1997. LSTM RNNs avoid the vanishing gradient problem and can learn "Very Deep Learning" tasks that require memories of events that happened thousands of discrete time steps before, which is important for speech. In 2003, LSTM started to become competitive with traditional speech recognizers on certain tasks. Later it was combined with connectionist temporal classification (CTC) in stacks of LSTM RNNs. In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which they made available through Google Voice Search.

In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh showed how a many-layered feedforward neural network could be effectively pre-trained one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then fine-tuning it using supervised backpropagation. The papers referred to learning for deep belief nets.
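For readers who want to see the mechanics, below is a minimal NumPy sketch of the contrastive-divergence (CD-1) update for a single restricted Boltzmann machine layer, the unsupervised building block that those 2006 papers stacked greedily. The sizes, learning rate and training loop are illustrative assumptions, not code from the papers.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0, W, b_h, b_v, lr=0.01):
        # Positive phase: hidden unit probabilities given the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step (reconstruct visibles, re-infer hiddens).
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # Approximate log-likelihood gradient: data statistics minus reconstruction statistics.
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
        b_v += lr * (v0 - p_v1).mean(axis=0)
        return p_h0  # hidden activations that feed the next layer up

    data = rng.random((64, 20))  # 64 training vectors with 20 visible units
    W = rng.normal(0, 0.1, (20, 8)); b_h = np.zeros(8); b_v = np.zeros(20)
    for epoch in range(50):
        hidden = cd1_step(data, W, b_h, b_v)

The greedy layer-by-layer method then treats the trained layer's hidden activations as "data" for the next RBM, and the resulting stack is fine-tuned with supervised backpropagation.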
Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved. Convolutional neural networks (CNNs) were superseded for ASR by CTC for LSTM, but are more successful in computer vision.

The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. Industrial applications of deep learning to large-scale speech recognition started around 2010.

The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets (DNNs) might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than the then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) systems, and also lower than more advanced generative model-based systems. The nature of the recognition errors produced by the two types of systems was characteristically different, offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems. Analysis around 2009-2010, contrasting the GMM (and other generative speech models) vs. DNN models, stimulated early industrial investment in deep learning for speech recognition, eventually leading to pervasive and dominant use in that industry. That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models.

In 2010, researchers extended deep learning from TIMIT to large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees.

Advances in hardware have driven renewed interest in deep learning. In 2009, Nvidia was involved in what was called the "big bang" of deep learning, "as deep-learning neural networks were trained with Nvidia graphics processing units (GPUs)." That year, Google Brain used Nvidia GPUs to create capable DNNs. While there, Andrew Ng determined that GPUs could increase the speed of deep-learning systems by about 100 times. In particular, GPUs are well-suited for the matrix/vector computations involved in machine learning. GPUs speed up training algorithms by orders of magnitude, reducing running times from weeks to days. Further, specialized hardware and algorithm optimizations can be used for efficient processing of deep learning models.

Deep learning revolution

[Figure: How deep learning is a subset of machine learning, and how machine learning is a subset of artificial intelligence (AI).]

In 2012, a team led by George E. Dahl won the "Merck Molecular Activity Challenge" using multi-task deep neural networks to predict the biomolecular target of one drug. In 2014, Hochreiter's group used deep learning to detect off-target and toxic effects of environmental chemicals in nutrients, household products and drugs, and won the "Tox21 Data Challenge" of NIH, FDA and NCATS.
Significant additional impacts in image or object recognition were felt from 2011 to 2012. Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs for years, including CNNs, fast implementations of CNNs with max-pooling on GPUs in the style of Ciresan and colleagues were needed to progress on computer vision. In 2011, this approach achieved for the first time superhuman performance in a visual pattern recognition contest. Also in 2011, it won the ICDAR Chinese handwriting contest, and in May 2012, it won the ISBI image segmentation contest. Until 2011, CNNs did not play a major role at computer vision conferences, but in June 2012, a paper by Ciresan et al. at the leading conference CVPR showed how max-pooling CNNs on GPU can dramatically improve many vision benchmark records. In October 2012, a similar system by Krizhevsky et al. won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. In November 2012, Ciresan et al.'s system also won the ICPR contest on analysis of large medical images for cancer detection, and in the following year also the MICCAI Grand Challenge on the same topic. In 2013 and 2014, the error rate on the ImageNet task using deep learning was further reduced, following a similar trend in large-scale speech recognition. The Wolfram Image Identification project publicized these improvements.

Image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs.

Some researchers state that the October 2012 ImageNet victory anchored the start of a "deep learning revolution" that has transformed the AI industry.

In March 2019, Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

Neural networks

Artificial neural networks

Main article: Artificial neural network

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.

An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.

Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) to the last (output) layer, possibly after traversing the layers multiple times.

The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.

Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.
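Backpropagation, mentioned above as the main deviation from biology, is just the chain rule applied layer by layer. The following is a minimal NumPy sketch of one gradient step for a one-hidden-layer network on a squared-error loss; the sizes, activation and learning rate are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny network: 3 inputs -> 4 hidden units (tanh) -> 1 linear output.
    W1, b1 = rng.normal(0, 0.5, (3, 4)), np.zeros(4)
    W2, b2 = rng.normal(0, 0.5, (4, 1)), np.zeros(1)
    lr = 0.1

    x = rng.random((5, 3))  # 5 example inputs
    y = rng.random((5, 1))  # their target outputs

    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2

    # Backward pass: propagate the error signal in the reverse direction.
    d_out = (y_hat - y) / len(x)      # gradient of 0.5 * mean squared error
    d_W2 = h.T @ d_out
    d_h = d_out @ W2.T * (1 - h**2)   # tanh'(z) = 1 - tanh(z)^2
    d_W1 = x.T @ d_h

    # Adjust the network to reflect that information.
    W2 -= lr * d_W2; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * d_W1; b1 -= lr * d_h.sum(axis=0)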
As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, playing "Go").

Deep neural networks

A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers. The DNN finds the correct mathematical manipulation to turn the input into the output, whether it be a linear relationship or a non-linear relationship. The network moves through the layers calculating the probability of each output. For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex DNNs have many layers, hence the name "deep" networks.

DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives. The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network.

Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.

DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights. That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data.

Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.

Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR).

Challenges

As with ANNs, many issues can arise with naively trained DNNs. Two common issues are overfitting and computation time.

DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data. Regularization methods such as Ivakhnenko's unit pruning or weight decay (L2 regularization) or sparsity (L1 regularization) can be applied during training to combat overfitting. Alternatively, dropout regularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies. Finally, data can be augmented via methods such as cropping and rotating such that smaller training sets can be increased in size to reduce the chances of overfitting.

DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), the learning rate, and initial weights. Sweeping through the parameter space for optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such as batching (computing the gradient on several training examples at once rather than individual examples), speed up computation. Large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for the matrix and vector computations.
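As an illustration of the dropout regularization described above, here is a minimal NumPy sketch of the common "inverted dropout" variant; the rate and scaling convention are assumptions for the example. Units are randomly silenced during training, while inference needs no adjustment.

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(h, rate=0.5, training=True):
        # Randomly zero out hidden units with probability `rate` during training.
        # Scaling by 1/(1 - rate) keeps the expected activation unchanged,
        # so the layer behaves consistently at inference time.
        if not training:
            return h
        mask = (rng.random(h.shape) >= rate).astype(h.dtype)
        return h * mask / (1.0 - rate)

    h = rng.random((2, 8))             # activations of a hidden layer
    print(dropout(h))                  # roughly half the units are silenced
    print(dropout(h, training=False))  # unchanged at test time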
Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It doesn't require learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.

Applications

Automatic speech recognition

Main article: Speech recognition

Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates is competitive with traditional speech recognizers on certain tasks.

The initial success in speech recognition was based on small-scale recognition tasks based on TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences. Its small size lets many configurations be tried. More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. The error rates listed below, including these early results and measured as percent phone error rates (PER), have been summarized since 1991.

Method: Percent phone error rate (PER) (%)
Randomly initialized RNN: 26.1
Bayesian triphone GMM-HMM: 25.6
Hidden trajectory (generative) model: 24.8
Monophone randomly initialized DNN: 23.4
Monophone DBN-DNN: 22.4
Triphone GMM-HMM with BMMI training: 21.7
Monophone DBN-DNN on fbank: 20.7
Convolutional DNN: 20.0
Convolutional DNN w. heterogeneous pooling: 18.7
Ensemble DNN/CNN/RNN: 18.3
Bidirectional LSTM: 17.9
Hierarchical convolutional deep maxout network: 16.5

The debut of DNNs for speaker recognition in the late 1990s and speech recognition around 2009-2011, and of LSTM around 2003-2007, accelerated progress in eight major areas:

- Scale-up/out and accelerated DNN training and decoding
- Sequence discriminative training
- Feature processing by deep models with solid understanding of the underlying mechanisms
- Adaptation of DNNs and related deep models
- Multi-task and transfer learning by DNNs and related deep models
- CNNs and how to design them to best exploit domain knowledge of speech
- RNNs and their rich LSTM variants
- Other types of deep models including tensor-based models and integrated deep generative/discriminative models

All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products, etc.) are based on deep learning.

Image recognition

Main article: Computer vision

A common evaluation set for image classification is the MNIST database data set. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.

Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011.

Deep learning-trained vehicles now interpret 360° camera views. Another example is Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human malformation connected to a large database of genetic syndromes.
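To ground the image recognition discussion above, here is a minimal PyTorch sketch of a small convolutional network of the kind commonly used on MNIST-scale images. The architecture, channel counts and layer sizes are illustrative assumptions, not a model described in this article.

    import torch
    from torch import nn

    # A small CNN for 28x28 grayscale digits (MNIST-scale), for illustration only.
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level features (edges)
        nn.ReLU(),
        nn.MaxPool2d(2),                              # 28x28 -> 14x14
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # compositions of edges
        nn.ReLU(),
        nn.MaxPool2d(2),                              # 14x14 -> 7x7
        nn.Flatten(),
        nn.Linear(32 * 7 * 7, 10),                    # class scores for digits 0-9
    )

    x = torch.randn(8, 1, 28, 28)  # a batch of 8 placeholder images
    logits = model(x)              # shape (8, 10)
    print(logits.shape)

Note how the architecture mirrors the layered abstraction described in the Overview: convolution and pooling stages extract progressively higher-level features before a final classification layer.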
Visual art processing

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) Neural Style Transfer, capturing the style of a given artwork and applying it in a visually pleasing manner to an arbitrary photograph or video, and c) generating striking imagery based on random visual input fields.

Natural language processing

Main article: Natural language processing

Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.

Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the data set; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition, text classification and others.

Recent developments generalize word embedding to sentence embedding.

Google Translate (GT) uses a large end-to-end long short-term memory network. Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples." It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages. The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations". GT uses English as an intermediate between most language pairs.

Drug discovery and toxicology

For more information, see Drug discovery and Toxicology.

A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. Research has explored use of deep learning to predict the biomolecular targets, off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs.

AtomNet is a deep learning system for structure-based rational drug design. AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.

In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice.

Customer relationship management

Main article: Customer relationship management

Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.

Recommendation systems

Main article: Recommender system

Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.

Bioinformatics

Main article: Bioinformatics

An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships. A sketch of the autoencoder idea follows below.
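Since autoencoders come up here and in the unsupervised pre-training discussed earlier, here is a minimal PyTorch sketch; the layer sizes and training details are illustrative assumptions. The network learns to reconstruct its input through a narrow bottleneck, and the bottleneck activations then serve as compact learned features, with no labels required.

    import torch
    from torch import nn

    # A tiny autoencoder: compress 100-dimensional inputs to 10 learned features.
    encoder = nn.Sequential(nn.Linear(100, 10), nn.ReLU())
    decoder = nn.Linear(10, 100)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.randn(256, 100)  # unlabeled data; no annotations required

    for step in range(200):
        opt.zero_grad()
        recon = decoder(encoder(x))
        loss = loss_fn(recon, x)  # reconstruction error drives the learning
        loss.backward()
        opt.step()

    features = encoder(x)  # compact representation usable for downstream prediction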
In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data.

Medical image analysis

Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation and image enhancement.

Mobile advertising

Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising data sets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.

Image restoration

Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization. These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration", which trains on an image data set, and Deep Image Prior, which trains on the image that needs restoration.

Financial fraud detection

Deep learning is being successfully applied to financial fraud detection and anti-money laundering. "Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g. anomaly detection.

Military

The United States Department of Defense applied deep learning to train robots in new tasks through observation.

Relation to human cognitive and brain development

Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s. These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support the self-organization somewhat analogous to the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input) to other layers. This process yields a self-organizing stack of transducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature."

A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism. Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality. In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.
Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons and neural populations. Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system both at the single-unit and at the population levels.

Commercial activity

Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them.

Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player. Google Translate uses a neural network to translate between more than 100 languages.

In 2015, Blippar demonstrated a mobile augmented reality application that uses deep learning to recognize objects in real time.

In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories.

As of 2008, researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor. First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration between U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation. Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job."

Criticism and comment

Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science.

Theory

See also: Explainable AI

A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence, is less clear. (E.g., does it converge? If so, how fast? What is it approximating?) Deep learning methods are often looked at as a black box, with most confirmations done empirically, rather than theoretically.

Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. Despite the power of deep learning methods, they still lack much of the functionality needed for realizing this goal entirely. Research psychologist Gary Marcus noted: "Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (...) have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems, like Watson (...) use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning." As an alternative to this emphasis on the limits of deep learning, one author speculated that it might be possible to train a machine vision stack to perform the sophisticated task of discriminating between "old master" and amateur figure drawings, and hypothesized that such a sensitivity might represent the rudiments of a non-trivial machine empathy. This same author proposed that this would be in line with anthropology, which identifies a concern with aesthetics as a key element of behavioral modernity.
In further reference to the idea that artistic sensitivity might inhere within relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layers) neural networks attempting to discern within essentially random data the images on which they were trained demonstrate a visual appeal: the original research notice received well over 1,000 comments, and was the subject of what was for a time the most frequently accessed article on The Guardian's website.

Errors

Some deep learning architectures display problematic behaviors, such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images and misclassifying minuscule perturbations of correctly classified images. Goertzel hypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-component artificial general intelligence (AGI) architectures. These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar decompositions of observed entities and events. Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules, and is a basic goal of both human language acquisition and artificial intelligence (AI).

Cyber threat

As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception. By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such a manipulation is termed an "adversarial attack." In 2016 researchers used one ANN to doctor images in trial and error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images then photographed successfully tricked an image classification system. One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.

Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017 researchers added stickers to stop signs and caused an ANN to misclassify them.

ANNs can however be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry. ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target.

Another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address that would download malware.

In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.
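The article does not name a specific attack technique, but one widely known instance of such input manipulation is the fast gradient sign method (FGSM). The following PyTorch sketch is an illustrative assumption (the classifier, label and budget are placeholders): it nudges every pixel slightly in the direction that most increases the classifier's loss, producing a perturbation that is small to human eyes but can change the prediction.

    import torch
    from torch import nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # input image
    y = torch.tensor([3])                             # its true label
    eps = 0.03                                        # perturbation budget

    loss = loss_fn(model(x), y)
    loss.backward()

    # Step each pixel in the direction that increases the loss.
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    # x_adv looks essentially identical to x but may now be misclassified.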
Reliance on human microwork

Most deep learning systems rely on training and verification data that is generated and/or annotated by humans. It has been argued in media philosophy that not only low-paid clickwork (e.g. on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such. The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers) and (5) clickwork. Mühlhoff argues that in most commercial end-user applications of deep learning, such as Facebook's face recognition system, the need for training data does not stop once an ANN is trained. Rather, there is a continued demand for human-generated verification data to constantly calibrate and update the ANN. For this purpose Facebook introduced the feature that once a user is automatically recognized in an image, they receive a notification. They can choose whether or not they would like to be publicly labeled on the image, or tell Facebook that it is not them in the picture. This user interface is a mechanism to generate "a constant stream of verification data" to further train the network in real time. As Mühlhoff argues, the involvement of human users to generate training and verification data is so typical for most commercial end-user applications of deep learning that such systems may be referred to as "human-aided artificial intelligence".

References

This page uses content that, though originally imported from the Wikipedia article Deep learning, might have been very heavily modified, perhaps even to the point of disagreeing completely with the original Wikipedia article. The list of authors can be seen in the page history. The text of Wikipedia is available under the Creative Commons Licence.

Categories: Neural nets