
What You'll Gain

Machine learning beginners will quickly get to know the theoretical foundations of the classic models and optimization algorithms used in AI,

build an intuitive understanding of them,

and thereby speed up and deepen their study of the courses.

Target Audience

Everyone

Course Introduction

This course teaches the probability and statistical inference needed for machine learning and AI. Built for the CSDN Academy artificial intelligence curriculum, it gives a systematic, comprehensive, yet accessible treatment of the underlying mathematics, including formula derivations and how to interpret them. The probability part covers the axioms of probability and their corollaries, conditional probability, Bayes' formula, random variables and their probability functions (CDF/PDF), and common distributions with their means and variances; the statistical inference part covers the law of large numbers and the central limit theorem, maximum likelihood estimation, and Bayesian estimation.
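
As a taste of the material, here is a minimal sketch of Bayes' formula from the syllabus with concrete numbers plugged in (my own illustration, not taken from the course; the prevalence and error rates are made up):

    # Bayes' rule P(A|B) = P(B|A)P(A)/P(B), with concrete numbers:
    # a test for a condition with 1% prevalence, 99% sensitivity, 5% false positive rate.
    p_d = 0.01                # P(D): prior probability of the condition
    p_pos_given_d = 0.99      # P(+|D): sensitivity
    p_pos_given_not_d = 0.05  # P(+|not D): false positive rate

    # Law of total probability: P(+) = P(+|D)P(D) + P(+|not D)P(not D)
    p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

    # Bayes' rule: P(D|+) = P(+|D)P(D) / P(+)
    print(round(p_pos_given_d * p_d / p_pos, 4))  # 0.1667: most positives are false here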

Course Discussion

Coming back to this video after studying machine learning.

And the download membership that was promised was never given to me, 666666666666666.

Actually, I've compared this with how probability courses abroad are taught. Your lectures are quite good, teacher. It would be even better if, when going through a formula, you plugged concrete numbers into it and computed the result. That's how it's usually taught abroad.

That last one was a negative review, can't CSDN tell????

Better off just reading a book. I doubt he even understands it himself. The earlier parts were fine, but by maximum likelihood estimation and maximum a posteriori estimation at the end, it sounds like he doesn't really get it either, and I had no idea what he was saying; the maximum likelihood part looked read straight out of a textbook. My guess is he's the kind of instructor who only knows the code and doesn't care about the mathematical principles, the kind who just uses whatever Python has already wrapped up for you.

The teacher explains things thoroughly, but I still find the last few estimators hard to follow; I'll go ask a teaching assistant.

Understanding it with line segments makes it really easy; before, working only from the formulas, I always forgot P(AB).

Very clear; it brought back knowledge I had long lost and made it feel approachable again.

Very clearly explained. I had never fully understood maximum likelihood and posterior estimation before; now I do.

Why isn't there a quiz for each chapter?
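
Several comments above single out maximum likelihood and maximum a posteriori estimation as the hardest part of the course, so here is a minimal coin-flip sketch contrasting the two (my own supplement, not course material; the prior is chosen arbitrarily):

    # MLE vs. MAP for a coin's heads probability, given 7 heads in 10 flips.
    heads, flips = 7, 10

    # MLE: the theta maximizing the binomial likelihood is the sample proportion.
    theta_mle = heads / flips  # 0.7

    # MAP with a Beta(a, b) prior: the posterior is Beta(a + heads, b + tails),
    # whose mode is (a + heads - 1) / (a + b + flips - 2).
    a, b = 5, 5  # assumed prior leaning toward a fair coin
    theta_map = (a + heads - 1) / (a + b + flips - 2)  # 11/18 ~ 0.611

    print(theta_mle, round(theta_map, 3))  # the prior pulls the MAP estimate toward 0.5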

Classmates' Notes

  • weixin_44139192 2020-06-01 20:25:54

    Source: Covariance and Correlation Coefficient

    Covariance is Cov(X, Y) = E[(X − EX)(Y − EY)]; it captures how data in two different dimensions vary together, and standardizing it as Cov divided by the product of the two standard deviations gives the correlation coefficient.

     

    Variance is D(X) = E[(X − EX)²]; it measures the spread of one-dimensional data.
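
    A minimal NumPy sketch (my addition, with made-up data) checking these definitions against np.cov and np.corrcoef:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.0, 1.0, 4.0, 3.0, 6.0])

    # Covariance from the definition Cov(X, Y) = E[(X - EX)(Y - EY)]
    cov_xy = np.mean((x - x.mean()) * (y - y.mean()))

    # Correlation: covariance standardized by the product of the standard deviations
    rho = cov_xy / (x.std() * y.std())

    # np.cov defaults to the unbiased (n - 1) estimate, so pass bias=True to match
    print(np.isclose(cov_xy, np.cov(x, y, bias=True)[0, 1]))  # True
    print(np.isclose(rho, np.corrcoef(x, y)[0, 1]))           # True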

  • weixin_44665362 2020-05-11 20:11:44

    Source: Random Experiments, Sample Space, and Random Events

    # Validation helpers, presumably from utils.py (the service module below
    # imports them from utils); these imports are inferred from that module.
    from flask import jsonify
    from flask_restful import abort
    from aipaas.logger_factory import logger

    # Whitelists of accepted values for each request parameter.
    PARAMS_CHECK = {
        "action": ["predict", "train", "check_model_status", "init_scene"],
        "columns": ["LAST_UPDATE_DATE"],
        "input_mode": ["s3", "df"]
    }

    def check_param(target_param, param, strict):
        """
        Check whether the param is in the default list.
        :param target_param: target param
        :param param: param's name, should be a string
        :param strict: if True/1 the param must not be null; if False/0 it may be null/None
        :return: process abort or not
        """
        if (strict and target_param not in PARAMS_CHECK[param]) or (
                target_param and target_param not in PARAMS_CHECK[param]):
            json_data = jsonify({"status": 400,
                                 "message": "param is not in {}: {}".format(str(PARAMS_CHECK[param]),
                                                                            target_param)})
            abort(json_data)
        else:
            logger.info("Process continue, because {} is valid".format(param))

    # -*- coding: utf-8 -*-
    """
    """
    from flask_restful import Api, request, abort
    from aipaas.logger_factory import logger
    from aipaas.soa.flask import SoaAuth
    import os
    import sys
    import pandas as pd
    from utils import check_columns, check_none
    from utils import check_param, check_model_status, S3Manager
    from utils import make_new_dir
    from utils import produce_unique_id
    from utils import add_new_scene
    from datetime import datetime

    sys.path.append(os.getcwd())
    from conf import ENV_TYPE, LOCAL_S3_FILE_PATH
    from multiprocessing.managers import BaseManager
    from flask import Flask, jsonify
    from feature_process import PrepareModelAll

    app = Flask(__name__)
    auth = SoaAuth(env_type=ENV_TYPE, skip_soa_auth=False)
    api = Api(app=app, catch_all_404s=True)

    # Share one prepared model across processes via a manager proxy.
    BaseManager.register('PreparedModel', PrepareModelAll)
    manager = BaseManager()
    manager.start()
    model = manager.PreparedModel()


    @app.route('/erp_conversion_detect/train_predict', methods=("POST",))  # @auth.required
    def train_predict():
        params = request.json or request.args
        action = params.get("attributes").get("action")
        logger.info("action:{}".format(action))
        check_param(action, "action", 1)

        # Only "predict" and "train" need input data; load and validate it first.
        if action not in ["check_model_status", "init_scene"]:
            to_check_id = params.get("attributes").get("to_check_id")
            logger.info("updated_id:{}".format(to_check_id))
            check_none(to_check_id, "to_check_id")

            input_mode = params.get("input_data").get("input_mode")  # s3 or df
            logger.info("Reading input data")
            input_df_data = params.get("input_data").get("input_df_data")
            check_param(input_mode, "input_mode", 1)

            if str(input_mode) == 's3':
                s3_url = params.get("input_data").get("s3_url")
                s3_ak = params.get("input_data").get("s3_ak")
                s3_sk = params.get("input_data").get("s3_sk")
                s3_bucket_name = params.get("input_data").get("s3_bucket_name")
                s3_file = params.get("input_data").get("s3_file")
                if not (s3_url and s3_ak and s3_sk and s3_bucket_name and s3_file):
                    abort(jsonify({"status": 400,
                                   "message": "s3 info is not enough! please check it!"}))
                else:
                    # Download the CSV from S3 to a local path named by to_check_id.
                    file_name = to_check_id + '.csv'
                    local_path = LOCAL_S3_FILE_PATH + file_name
                    sc = S3Manager(s3_bucket_name, s3_ak, s3_sk, s3_url)
                    sc.move_files("download", s3_file, local_path)
                    new_data = pd.read_csv(local_path, header=0)
            new_data = pd.read_json(
                input_df_data) if input_mode == 'df' else new_data
            columns = new_data.columns
            check_none(new_data, "input_data")
            check_columns(columns, "columns")
            new_data["LAST_UPDATE_DATE"] = pd.to_datetime(new_data["LAST_UPDATE_DATE"])
            logger.info(new_data.head())
            try:
                if action == "predict":
                    percent_score = params.get("attributes").get("percent_score")
                    logger.info("percent score:{}".format(percent_score))
                    percent_score = int(percent_score) if percent_score else 100
                    logger.info("start predict:")
                    scene_id = params.get("attributes").get("scene_id")
                    check_none(scene_id, "scene_id")
                    logger.info("scene_id is ok:{}".format(scene_id))
                    output_df = model.predict(new_data, to_check_id, percent_score, scene_id)
                    logger.info("predict done...")
                    # Pair each prediction with its LAST_UPDATE_DATE timestamp.
                    output_df = [{str(a): b} for a, b in zip(new_data["LAST_UPDATE_DATE"],
                                                             output_df)] if output_df else "model doesn't exist"
                    return jsonify({"status": 200, "data": output_df,
                                    "message": "Done making prediction for model:{}".format(action)})
                elif action == "train":
                    logger.info("start train:")
                    scene_id = params.get("attributes").get("scene_id")
                    check_none(scene_id, "scene_id")
                    logger.info("scene_id is ok:{}".format(scene_id))
                    output = model.train(new_data, to_check_id, scene_id)
                    return jsonify({"status": 200, "data": output,
                                    "message": "Done making training for model:{}-{}".
                                    format(scene_id, to_check_id)})
            except Exception as e:
                return jsonify({"status": 400, "data": "",
                                "message": "Error occurred when processing API request: {}".format(e)})
        elif action == "check_model_status":
            logger.info("start check model status:")
            scene_id = params.get("attributes").get("scene_id")
            check_none(scene_id, "scene_id")
            logger.info("scene_id is ok:{}".format(scene_id))
            to_check_id = params.get("attributes").get("to_check_id")
            logger.info("to_check_id:{}".format(to_check_id))
            check_none(to_check_id, "to_check_id")
            output = check_model_status(to_check_id, scene_id)
            logger.info("Done\n")
            return jsonify({"status": 200, "data": output,
                            "message": "Done making {} for model:{}".format(action, to_check_id)})
        elif action == "init_scene":
            # Create a fresh scene id for the user and persist it to config/S3.
            user_id = params.get("attributes").get("user_id")
            logger.info("start init scene for user:{}".format(user_id))
            user_id = user_id if user_id else "000"
            user_id = "_".join(
                [user_id, str(datetime.strftime(datetime.now(), "%Y-%m-%d %H:%M:%S.%f"))])
            new_scene = produce_unique_id()
            make_new_dir(new_scene)
            logger.info("make new file done:{}".format(new_scene))
            # write into config file and save to s3
            logger.info("add new scene id:{}:{}".format(user_id, new_scene))
            add_new_scene({user_id: new_scene})
            return jsonify({"status": 200, "data": new_scene,
                            "message": "Your new unique id has been created! Please record it!"})


    if __name__ == '__main__':
        logger.info("start flask app")
        app.run()
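
    For context, a sketch of how this endpoint might be exercised with requests; the payload fields follow the code above, but the host, port, and values are assumptions.

    import requests

    # Hypothetical call; assumes the app is running locally on Flask's default port.
    payload = {
        "attributes": {"action": "init_scene", "user_id": "demo_user"},
        "input_data": {}
    }
    resp = requests.post(
        "http://127.0.0.1:5000/erp_conversion_detect/train_predict",
        json=payload)
    print(resp.json())  # expected: {"status": 200, "data": "<new scene id>", ...}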

  • hp13761912901 2020-04-01 17:56:05

    Source: Measuring Likelihood

    Frequency count, relative frequency ?= the likelihood of a random event

    Probability ?= the likelihood of a random event

    The law of large numbers
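
    A minimal simulation sketch (my addition) of what the law of large numbers promises here: the relative frequency of an event converges to its probability as the number of trials grows.

    import numpy as np

    rng = np.random.default_rng(0)
    p_true = 0.3  # true probability of the event

    for n in (100, 10_000, 1_000_000):
        freq = (rng.random(n) < p_true).mean()
        print(n, freq)  # relative frequency approaches p_true as n grows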
