ImageDataGenerator that outputs patches instead of full image

Problem description

I have a large dataset that I want to use to train a CNN with Keras (too large to load into memory). I always train with ImageDataGenerator.flow_from_dataframe, since I keep my images in different directories, like this:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1./255.
)
train_gen = datagen.flow_from_dataframe(
    dataframe=train_df,
    x_col="filepath",
    class_mode="input",
    shuffle=True,
    seed=1)
                  
However, this time I don't want to use the full images. Instead, I want to use random patches of the images, i.e., each time I want to pick a random image and take a random 32x32 patch of it. How can I do this?

I thought about using tf.extract_image_patches or sklearn.feature_extraction.image.extract_patches_2d, but I don't know whether they can be integrated into flow_from_dataframe.

Any help would be appreciated.
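As a point of reference, taking one random 32x32 patch from a single in-memory image is just array slicing; a minimal NumPy sketch (dummy data, and the `random_patch` helper is hypothetical, not part of the original question):

```python
import numpy as np

def random_patch(image, size=32, rng=None):
    """Return one random size x size patch from an H x W x C image."""
    rng = rng or np.random.default_rng()
    h, w, _ = image.shape
    i = rng.integers(0, h - size + 1)  # top-left row of the patch
    j = rng.integers(0, w - size + 1)  # top-left column of the patch
    return image[i:i + size, j:j + size]

image = np.zeros((256, 256, 3), dtype=np.float32)  # dummy image
patch = random_patch(image, size=32)
print(patch.shape)  # (32, 32, 3)
```

The harder part, addressed by the answer below, is doing this inside the ImageDataGenerator pipeline rather than on loose arrays.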

Recommended answer

You can try using a preprocessing_function for ImageDataGenerator combined with tf.image.extract_patches:

import tensorflow as tf
import matplotlib.pyplot as plt

BATCH_SIZE = 32

def get_patches():
    def _get_patches(image):
        # extract_patches expects a batch dimension
        image = tf.expand_dims(image, 0)
        patches = tf.image.extract_patches(images=image,
                                           sizes=[1, 32, 32, 1],
                                           strides=[1, 32, 32, 1],
                                           rates=[1, 1, 1, 1],
                                           padding='VALID')
        # preprocessing_function must return the same shape as its input,
        # so pack the 8x8 grid of flattened patches back into (256, 256, 3)
        patches = tf.reshape(patches, (1, 256, 256, 3))
        return patches
    return _get_patches

def reshape_data(images, labels):
    ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    for b in tf.range(BATCH_SIZE):
        # pick a random patch index from the 8x8 grid
        i = tf.random.uniform((), maxval=int(256/32), dtype=tf.int32)
        j = tf.random.uniform((), maxval=int(256/32), dtype=tf.int32)
        # (256, 256, 3) -> (8, 8, 3072): one flattened 32x32x3 patch per cell
        patched_image = tf.reshape(images[b], (8, 8, 3072))
        ta = ta.write(ta.size(), tf.reshape(patched_image[i, j], shape=(32, 32, 3)))
    return ta.stack(), labels

preprocessing = get_patches()
flowers = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)

img_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255, rotation_range=20, preprocessing_function=preprocessing)

ds = tf.data.Dataset.from_generator(
    lambda: img_gen.flow_from_directory(flowers, batch_size=BATCH_SIZE, shuffle=True),
    output_types=(tf.float32, tf.float32))

ds = ds.map(reshape_data)
images, _ = next(iter(ds.take(1)))

image = images[0]  # (32, 32, 3)

plt.imshow(image.numpy())
                  
The problem is that the preprocessing_function of ImageDataGenerator expects an output with the same shape as its input. So I first create the patches and, based on them, build an output with the same shape as the original image. Later, in the reshape_data method, I reshape the images from (256, 256, 3) to (8, 8, 3072), extract one random patch, and return it with the shape (32, 32, 3).
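The round trip in reshape_data can be checked with plain NumPy: packing an 8x8 grid of flattened 32x32x3 patches into a (256, 256, 3) array and reshaping back recovers each patch exactly, because both reshapes preserve the row-major element order. A small self-check on dummy data (not part of the original answer):

```python
import numpy as np

# dummy image whose pixel values encode their own positions
img = np.arange(256 * 256 * 3, dtype=np.float32).reshape(256, 256, 3)

# emulate tf.image.extract_patches: grid[i, j] is patch (i, j), flattened
grid = np.stack([
    np.stack([img[32*i:32*(i+1), 32*j:32*(j+1)].reshape(3072)
              for j in range(8)])
    for i in range(8)
])                                     # shape (8, 8, 3072)

packed = grid.reshape(256, 256, 3)     # what _get_patches returns
unpacked = packed.reshape(8, 8, 3072)  # what reshape_data undoes

i, j = 2, 5                            # an arbitrary patch index
patch = unpacked[i, j].reshape(32, 32, 3)
assert np.array_equal(patch, img[32*i:32*(i+1), 32*j:32*(j+1)])
```

This is why the (256, 256, 3) intermediate is safe: it is only a container for the patch grid, and reshape_data can always index individual patches back out of it.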
