wang_peng1 · Beijing
Article list
I don't like answering questions in blog comments; I just want to keep the blog clean. Please only reply in the comments if you have a working result or a better approach to share. If you want to ask me something or talk with me, add my QQ 3323297. Thanks; I'll help if I can, and I hope we can be friends. Please don't leave further comments. You can also join QQ group 6884330, but that group is full, so please join group 75542406 instead.

URI to path

Taken from Zhihu's Matisse:

package com.zhihu.matisse.internal.utils;

import android.annotation.TargetApi;
import android.content.ContentUris;
import android.content.Context;
import android.database.Cursor;
import android.net.Uri;
import android.os.Build;
import android.os.Environment;
import android.pro ...
First download Win32DiskImager-0.9.5-install and the debian-8.4.0-amd64-DVD-1.iso image. Do not put the ISO on the USB stick; keep it on the local PC, otherwise writing it with DiskImager fails. For the remaining steps, follow https://jingyan.baidu.com/article/4b07be3cb16a4e48b280f361.html
http://blog.csdn.net/loyachen/article/details/51113118 (this link works; personally tested)
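For reference, what DiskImager does is conceptually just a raw block-by-block copy of the ISO onto the device. A minimal Python sketch of that idea (the device path /dev/sdX is a hypothetical placeholder, and writing to the wrong device destroys its data, so on Windows use the tool itself):

import shutil

ISO_PATH = "debian-8.4.0-amd64-DVD-1.iso"  # keep the ISO on the local PC
DEVICE = "/dev/sdX"                        # hypothetical target device

# Stream the image to the raw device in 4 MiB blocks.
with open(ISO_PATH, "rb") as src, open(DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)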

Extracting data with gzip

def extract_data(filename, num_images):
    """Extract the images into a 4D tensor [image index, y, x, channels].

    Values are rescaled from [0, 255] down to [-0.5, 0.5].
    """
    print('Extracting', filename)
    with gzip.open(filename) as bytestream:
        bytestrea ...
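The snippet is cut off above; here is a runnable completion in the spirit of TensorFlow's MNIST convolutional example, which it appears to come from. The constants IMAGE_SIZE, NUM_CHANNELS and PIXEL_DEPTH are assumptions matching MNIST-style IDX data:

import gzip
import numpy as np

IMAGE_SIZE = 28     # assumption: MNIST-sized images
NUM_CHANNELS = 1    # assumption: grayscale
PIXEL_DEPTH = 255

def extract_data(filename, num_images):
    """Extract the images into a 4D tensor [image index, y, x, channels].

    Values are rescaled from [0, 255] down to [-0.5, 0.5].
    """
    print('Extracting', filename)
    with gzip.open(filename) as bytestream:
        bytestream.read(16)  # skip the 16-byte IDX header
        buf = bytestream.read(
            IMAGE_SIZE * IMAGE_SIZE * num_images * NUM_CHANNELS)
        data = np.frombuffer(buf, dtype=np.uint8).astype(np.float32)
        data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
        return data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS)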
import argparse
import os
import sys

from six.moves import urllib
import tensorflow as tf

DATA_URL = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult'
TRAINING_FILE = 'adult.data'
TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE)
EVAL_FILE = 'adult.test'
EVAL_URL = '%s/ ...
import argparse
import os
import sys
import tarfile

from six.moves import urllib
import tensorflow as tf

DATA_URL = 'https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz'

parser = argparse.ArgumentParser()
parser.add_argument(
    '--data_dir', type=str, default='/tmp/cifar10_dat ...
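Both snippets lead up to the same download step. A minimal sketch of such a helper; the name maybe_download and the progress callback are illustrative assumptions, not taken from the truncated code:

import os
import sys
from six.moves import urllib

def maybe_download(url, data_dir):
    """Download `url` into `data_dir` unless the file is already there."""
    if not os.path.exists(data_dir):
        os.makedirs(data_dir)
    filename = url.split('/')[-1]
    filepath = os.path.join(data_dir, filename)
    if not os.path.exists(filepath):
        def _progress(count, block_size, total_size):
            sys.stdout.write('\r>> Downloading %s %.1f%%' % (
                filename, 100.0 * count * block_size / total_size))
            sys.stdout.flush()
        filepath, _ = urllib.request.urlretrieve(url, filepath, _progress)
        print()
    return filepath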

TFRecordWriter

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def convert_to(dataset, name, directory):
    """Converts a dataset to TFRe ...
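A runnable completion of convert_to, modeled on TensorFlow's convert_to_records example. The dataset object with images, labels and num_examples attributes is an assumption; the _int64_feature/_bytes_feature helpers are the ones defined above:

import os
import tensorflow as tf

def convert_to(dataset, name, directory):
    """Converts a dataset to TFRecords and writes it into `directory`."""
    images = dataset.images          # assumed shape: [N, rows, cols, depth]
    labels = dataset.labels
    num_examples = dataset.num_examples

    rows, cols, depth = images.shape[1], images.shape[2], images.shape[3]
    filename = os.path.join(directory, name + '.tfrecords')
    print('Writing', filename)
    with tf.python_io.TFRecordWriter(filename) as writer:
        for index in range(num_examples):
            image_raw = images[index].tostring()
            example = tf.train.Example(features=tf.train.Features(feature={
                'height': _int64_feature(rows),
                'width': _int64_feature(cols),
                'depth': _int64_feature(depth),
                'label': _int64_feature(int(labels[index])),
                'image_raw': _bytes_feature(image_raw)}))
            writer.write(example.SerializeToString())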
x, y = reader.ptb_producer(raw_data, batch_size, num_steps)
with self.test_session() as session:
    coord = tf.train.Coordinator()
    tf.train.start_queue_runners(session, coord=coord)
    try:
        xval, yval = session.run([x, y])
        print(xval)
        print(yval)
        ...
epoch_size = (batch_len - 1) // num_steps
assertion = tf.assert_positive(
    epoch_size,
    message="epoch_size == 0, decrease batch_size or num_steps")
with tf.control_dependencies([assertion]):
    epoch_size = tf.identity(epoch_size, name="epoch_size")
...
def _read_words(filename):
    with tf.gfile.GFile(filename, "r") as f:
        if Py3:
            return f.read().replace("\n", "<eos>").split()
        else:
            return f.read().decode("utf-8").replace("\n", "<eos>").split()

def ...

Using tf.identity

https://stackoverflow.com/questions/34877523/in-tensorflow-what-is-tf-identity-used-for After some stumbling I think I've noticed a single use case that fits all the examples I've seen. If there are other use cases, please elaborate with an example. Use case: Suppose you'd like to run an op ...
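The concrete example from that answer (TF1 graph mode): wrapping x in tf.identity under tf.control_dependencies makes every evaluation of y also run the assign op, which a plain y = x alias would not. A self-contained sketch:

import tensorflow as tf

x = tf.Variable(0.0)
x_plus_1 = tf.assign_add(x, 1)

with tf.control_dependencies([x_plus_1]):
    # y = x would only alias the Python name and skip the dependency;
    # tf.identity creates a new op inside the control_dependencies block.
    y = tf.identity(x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(5):
        print(sess.run(y))  # prints 1.0, 2.0, 3.0, 4.0, 5.0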
This post is a full copy of http://www.machinelearninguru.com/deep_learning/tensorflow/basics/tfrecord/tfrecord.html Introduction In the previous post we explained the benefits of saving a large dataset in a single HDF5 file. In this post we will learn how to convert our data into the Tensorflow standard format, ...
In [71]: a1 = tf.constant([2,2], name="a1")
In [72]: a1
Out[72]: <tf.Tensor 'a1_5:0' shape=(2,) dtype=int32>

# add a new dimension
In [73]: a1_new = a1[tf.newaxis, :]
In [74]: a1_new
Out[74]: <tf.Tensor 'strided_slice_5:0' shape=(1, 2) dtype=int32>

# add one more ...
An RNN's input has shape [B, T, ...], where B is the batch size and T is the sequence length (e.g., the number of words in a sentence); the remaining dimensions depend on the data. Sentences within a single batch usually have different lengths, but the RNN requires them to be equal, so the shorter ones must be padded, typically with 0. If only a few sentences have length 1000 while the average length is 20, padding everything to 1000 wastes a lot of space, hence batch padding: with a batch size of 32, sequences only need to match within the current batch, and the next batch can use a different length. That way only the rare length-1000 sentences need heavy padding, which saves space. This can be done with tf.train ...
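The truncated sentence points at a tf.train helper; in TF1 that would be tf.train.batch with dynamic_pad=True. A self-contained sketch of the same per-batch padding idea using tf.data (the toy sentences are made up for illustration):

import tensorflow as tf

# Toy variable-length "sentences" of word ids.
sentences = [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10], [11]]

dataset = tf.data.Dataset.from_generator(
    lambda: iter(sentences), output_types=tf.int32, output_shapes=[None])

# Each batch is padded with 0 only up to the longest sequence in that batch.
dataset = dataset.padded_batch(2, padded_shapes=[None])

batch = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    print(sess.run(batch))  # shape (2, 3): first two sentences, padded to 3
    print(sess.run(batch))  # shape (2, 5): last two sentences, padded to 5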