
Preface

Following up on another article of mine: How to brighten photos — installing and using the DPED open-source machine learning project | Machine Learning_安中藩's blog - CSDN blog

I found that the DPED project has to be run from the command line and takes a photo directory as its processing source. So I hacked the project a bit, simplified its structure, and merged everything into a single Python file.
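For reference, this is how the upstream project is driven from the command line (the same invocation is kept in the header comment of the script below):

python test_model.py model=iphone_orig dped_dir=dped/ test_subset=full iteration=all resolution=orig use_gpu=true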

The model files are fairly large, so I provide a download link; after downloading, just put them into the corresponding project directories.

GitHub repository address: github address

Project Description

Project Structure

Let's take a look at the project structure, shown in the figure below:

(figure: project structure)

Model file download link: https://pan.baidu.com/s/1IUm8xz5dhh8iW_bLWfihPQ  Extraction code: TUAN

After downloading, put imagenet-vgg-verydeep-19.mat into the vgg_pretrained directory.
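A minimal sketch to sanity-check the layout before running anything; it assumes you launch it from the project root, and the two paths come straight from the script below:

import os

# Check that the downloaded weights sit where the script expects them.
for path in ("vgg_pretrained/imagenet-vgg-verydeep-19.mat", "models_orig/"):
    print(path, "OK" if os.path.exists(path) else "MISSING")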

For the environment dependencies you can refer directly to: Python implementation of replacing a photo's portrait background, precise down to hair strands (code included) | Machine Learning_安中藩's blog - CSDN blog
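If you just want a quick check that the packages the script imports are available, here is a small sketch (the package list is taken from the imports below; the exact versions are whatever the linked article sets up):

import importlib

# Report which of the required third-party packages are importable.
for pkg in ("tensorflow", "numpy", "scipy", "imageio", "PIL"):
    try:
        mod = importlib.import_module(pkg)
        print(pkg, getattr(mod, "__version__", "unknown"))
    except ImportError:
        print(pkg, "NOT INSTALLED")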

Data Preparation

I prepared a test image, shown below:

(figure: test image)

The Modified Code

No more talk — here is the core code.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time   : 2021/11/27 13:48
# @Author : 剑客安中藩_ALiang
# @Site   :
# @File   : dped.py
#
# Original command-line invocation of the upstream script:
# python test_model.py model=iphone_orig dped_dir=dped/ test_subset=full iteration=all resolution=orig use_gpu=true
import os
import sys
import uuid
from functools import reduce
import imageio
import numpy as np
import scipy.stats as st
import tensorflow as tf
from PIL import Image


# ---------------------- hy add 2 ----------------------
def log10(x):
    numerator = tf.compat.v1.log(x)
    denominator = tf.compat.v1.log(tf.constant(10, dtype=numerator.dtype))
    return numerator / denominator


def _tensor_size(tensor):
    from operator import mul
    return reduce(mul, (d.value for d in tensor.get_shape()[1:]), 1)


def gauss_kernel(kernlen=21, nsig=3, channels=1):
    # build a normalized 2-D Gaussian kernel, replicated across channels
    interval = (2 * nsig + 1.) / kernlen
    x = np.linspace(-nsig - interval / 2., nsig + interval / 2., kernlen + 1)
    kern1d = np.diff(st.norm.cdf(x))
    kernel_raw = np.sqrt(np.outer(kern1d, kern1d))
    kernel = kernel_raw / kernel_raw.sum()
    out_filter = np.array(kernel, dtype=np.float32)
    out_filter = out_filter.reshape((kernlen, kernlen, 1, 1))
    out_filter = np.repeat(out_filter, channels, axis=2)
    return out_filter


def blur(x):
    kernel_var = gauss_kernel(21, 3, 3)
    return tf.nn.depthwise_conv2d(x, kernel_var, [1, 1, 1, 1], padding='SAME')


def process_command_args(arguments):
    # specifying default parameters
    batch_size = 50
    train_size = 30000
    learning_rate = 5e-4
    num_train_iters = 20000
    w_content = 10
    w_color = 0.5
    w_texture = 1
    w_tv = 2000
    dped_dir = 'dped/'
    vgg_dir = 'vgg_pretrained/imagenet-vgg-verydeep-19.mat'
    eval_step = 1000
    phone = ""

    for args in arguments:
        if args.startswith("model"):
            phone = args.split("=")[1]
        if args.startswith("batch_size"):
            batch_size = int(args.split("=")[1])
        if args.startswith("train_size"):
            train_size = int(args.split("=")[1])
        if args.startswith("learning_rate"):
            learning_rate = float(args.split("=")[1])
        if args.startswith("num_train_iters"):
            num_train_iters = int(args.split("=")[1])
        # -----------------------------------
        if args.startswith("w_content"):
            w_content = float(args.split("=")[1])
        if args.startswith("w_color"):
            w_color = float(args.split("=")[1])
        if args.startswith("w_texture"):
            w_texture = float(args.split("=")[1])
        if args.startswith("w_tv"):
            w_tv = float(args.split("=")[1])
        # -----------------------------------
        if args.startswith("dped_dir"):
            dped_dir = args.split("=")[1]
        if args.startswith("vgg_dir"):
            vgg_dir = args.split("=")[1]
        if args.startswith("eval_step"):
            eval_step = int(args.split("=")[1])

    if phone == "":
        print("\nPlease specify the camera model by running the script with the following parameter:\n")
        print("python train_model.py model={iphone,blackberry,sony}\n")
        sys.exit()

    if phone not in ["iphone", "sony", "blackberry"]:
        print("\nPlease specify the correct camera model:\n")
        print("python train_model.py model={iphone,blackberry,sony}\n")
        sys.exit()

    print("\nThe following parameters will be applied for CNN training:\n")
    print("Phone model:", phone)
    print("Batch size:", batch_size)
    print("Learning rate:", learning_rate)
    print("Training iterations:", str(num_train_iters))
    print()
    print("Content loss:", w_content)
    print("Color loss:", w_color)
    print("Texture loss:", w_texture)
    print("Total variation loss:", str(w_tv))
    print()
    print("Path to DPED dataset:", dped_dir)
    print("Path to VGG-19 network:", vgg_dir)
    print("Evaluation step:", str(eval_step))
    print()

    return phone, batch_size, train_size, learning_rate, num_train_iters, \
        w_content, w_color, w_texture, w_tv, \
        dped_dir, vgg_dir, eval_step


def process_test_model_args(arguments):
    phone = ""
    dped_dir = 'dped/'
    test_subset = "small"
    iteration = "all"
    resolution = "orig"
    use_gpu = "true"

    for args in arguments:
        if args.startswith("model"):
            phone = args.split("=")[1]
        if args.startswith("dped_dir"):
            dped_dir = args.split("=")[1]
        if args.startswith("test_subset"):
            test_subset = args.split("=")[1]
        if args.startswith("iteration"):
            iteration = args.split("=")[1]
        if args.startswith("resolution"):
            resolution = args.split("=")[1]
        if args.startswith("use_gpu"):
            use_gpu = args.split("=")[1]

    if phone == "":
        print("\nPlease specify the model by running the script with the following parameter:\n")
        print("python test_model.py model={iphone,blackberry,sony,iphone_orig,blackberry_orig,sony_orig}\n")
        sys.exit()

    return phone, dped_dir, test_subset, iteration, resolution, use_gpu


def get_resolutions():
    # values are [IMAGE_HEIGHT, IMAGE_WIDTH]
    res_sizes = {}
    res_sizes["iphone"] = [1536, 2048]
    res_sizes["iphone_orig"] = [1536, 2048]
    res_sizes["blackberry"] = [1560, 2080]
    res_sizes["blackberry_orig"] = [1560, 2080]
    res_sizes["sony"] = [1944, 2592]
    res_sizes["sony_orig"] = [1944, 2592]
    res_sizes["high"] = [1260, 1680]
    res_sizes["medium"] = [1024, 1366]
    res_sizes["small"] = [768, 1024]
    res_sizes["tiny"] = [600, 800]
    return res_sizes


def get_specified_res(res_sizes, phone, resolution):
    if resolution == "orig":
        IMAGE_HEIGHT = res_sizes[phone][0]
        IMAGE_WIDTH = res_sizes[phone][1]
    else:
        IMAGE_HEIGHT = res_sizes[resolution][0]
        IMAGE_WIDTH = res_sizes[resolution][1]
    IMAGE_SIZE = IMAGE_WIDTH * IMAGE_HEIGHT * 3
    return IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_SIZE


def extract_crop(image, resolution, phone, res_sizes):
    if resolution == "orig":
        return image
    # crop the central region matching the requested resolution
    x_up = int((res_sizes[phone][1] - res_sizes[resolution][1]) / 2)
    y_up = int((res_sizes[phone][0] - res_sizes[resolution][0]) / 2)
    x_down = x_up + res_sizes[resolution][1]
    y_down = y_up + res_sizes[resolution][0]
    return image[y_up: y_down, x_up: x_down, :]


# ---------------------- hy add 1 ----------------------
def resnet(input_image):
    with tf.compat.v1.variable_scope("generator"):
        W1 = weight_variable([9, 9, 3, 64], name="W1")
        b1 = bias_variable([64], name="b1")
        c1 = tf.nn.relu(conv2d(input_image, W1) + b1)

        # residual 1
        W2 = weight_variable([3, 3, 64, 64], name="W2")
        b2 = bias_variable([64], name="b2")
        c2 = tf.nn.relu(_instance_norm(conv2d(c1, W2) + b2))
        W3 = weight_variable([3, 3, 64, 64], name="W3")
        b3 = bias_variable([64], name="b3")
        c3 = tf.nn.relu(_instance_norm(conv2d(c2, W3) + b3)) + c1

        # residual 2
        W4 = weight_variable([3, 3, 64, 64], name="W4")
        b4 = bias_variable([64], name="b4")
        c4 = tf.nn.relu(_instance_norm(conv2d(c3, W4) + b4))
        W5 = weight_variable([3, 3, 64, 64], name="W5")
        b5 = bias_variable([64], name="b5")
        c5 = tf.nn.relu(_instance_norm(conv2d(c4, W5) + b5)) + c3

        # residual 3
        W6 = weight_variable([3, 3, 64, 64], name="W6")
        b6 = bias_variable([64], name="b6")
        c6 = tf.nn.relu(_instance_norm(conv2d(c5, W6) + b6))
        W7 = weight_variable([3, 3, 64, 64], name="W7")
        b7 = bias_variable([64], name="b7")
        c7 = tf.nn.relu(_instance_norm(conv2d(c6, W7) + b7)) + c5

        # residual 4
        W8 = weight_variable([3, 3, 64, 64], name="W8")
        b8 = bias_variable([64], name="b8")
        c8 = tf.nn.relu(_instance_norm(conv2d(c7, W8) + b8))
        W9 = weight_variable([3, 3, 64, 64], name="W9")
        b9 = bias_variable([64], name="b9")
        c9 = tf.nn.relu(_instance_norm(conv2d(c8, W9) + b9)) + c7

        # convolutional
        W10 = weight_variable([3, 3, 64, 64], name="W10")
        b10 = bias_variable([64], name="b10")
        c10 = tf.nn.relu(conv2d(c9, W10) + b10)
        W11 = weight_variable([3, 3, 64, 64], name="W11")
        b11 = bias_variable([64], name="b11")
        c11 = tf.nn.relu(conv2d(c10, W11) + b11)

        # final
        W12 = weight_variable([9, 9, 64, 3], name="W12")
        b12 = bias_variable([3], name="b12")
        enhanced = tf.nn.tanh(conv2d(c11, W12) + b12) * 0.58 + 0.5

    return enhanced


def adversarial(image_):
    with tf.compat.v1.variable_scope("discriminator"):
        conv1 = _conv_layer(image_, 48, 11, 4, batch_nn=False)
        conv2 = _conv_layer(conv1, 128, 5, 2)
        conv3 = _conv_layer(conv2, 192, 3, 1)
        conv4 = _conv_layer(conv3, 192, 3, 1)
        conv5 = _conv_layer(conv4, 128, 3, 2)

        flat_size = 128 * 7 * 7
        conv5_flat = tf.reshape(conv5, [-1, flat_size])

        W_fc = tf.Variable(tf.compat.v1.truncated_normal([flat_size, 1024], stddev=0.01))
        bias_fc = tf.Variable(tf.constant(0.01, shape=[1024]))
        fc = leaky_relu(tf.matmul(conv5_flat, W_fc) + bias_fc)

        W_out = tf.Variable(tf.compat.v1.truncated_normal([1024, 2], stddev=0.01))
        bias_out = tf.Variable(tf.constant(0.01, shape=[2]))
        adv_out = tf.nn.softmax(tf.matmul(fc, W_out) + bias_out)

    return adv_out


def weight_variable(shape, name):
    initial = tf.compat.v1.truncated_normal(shape, stddev=0.01)
    return tf.Variable(initial, name=name)


def bias_variable(shape, name):
    initial = tf.constant(0.01, shape=shape)
    return tf.Variable(initial, name=name)


def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def leaky_relu(x, alpha=0.2):
    return tf.maximum(alpha * x, x)


def _conv_layer(net, num_filters, filter_size, strides, batch_nn=True):
    weights_init = _conv_init_vars(net, num_filters, filter_size)
    strides_shape = [1, strides, strides, 1]
    bias = tf.Variable(tf.constant(0.01, shape=[num_filters]))
    net = tf.nn.conv2d(net, weights_init, strides_shape, padding='SAME') + bias
    net = leaky_relu(net)
    if batch_nn:
        net = _instance_norm(net)
    return net


def _instance_norm(net):
    batch, rows, cols, channels = [i.value for i in net.get_shape()]
    var_shape = [channels]
    mu, sigma_sq = tf.compat.v1.nn.moments(net, [1, 2], keepdims=True)
    shift = tf.Variable(tf.zeros(var_shape))
    scale = tf.Variable(tf.ones(var_shape))
    epsilon = 1e-3
    normalized = (net - mu) / (sigma_sq + epsilon) ** 0.5
    return scale * normalized + shift


def _conv_init_vars(net, out_channels, filter_size, transpose=False):
    _, rows, cols, in_channels = [i.value for i in net.get_shape()]
    if not transpose:
        weights_shape = [filter_size, filter_size, in_channels, out_channels]
    else:
        weights_shape = [filter_size, filter_size, out_channels, in_channels]
    weights_init = tf.Variable(
        tf.compat.v1.truncated_normal(weights_shape, stddev=0.01, seed=1),
        dtype=tf.float32)
    return weights_init


# ---------------------- hy add 0 ----------------------
def beautify(pic_path: str, output_dir: str, gpu=1):
    tf.compat.v1.disable_v2_behavior()

    # fixed settings replacing the upstream command-line arguments
    phone = "iphone_orig"
    test_subset = "full"
    iteration = "all"
    resolution = "orig"

    # get all available image resolutions, then the specified one
    res_sizes = get_resolutions()
    IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_SIZE = get_specified_res(res_sizes, phone, resolution)

    if gpu == 1:
        use_gpu = "true"
    else:
        use_gpu = "false"

    # disable gpu if specified
    config = tf.compat.v1.ConfigProto(device_count={'GPU': 0}) if use_gpu == "false" else None

    # create placeholders for input images
    x_ = tf.compat.v1.placeholder(tf.float32, [None, IMAGE_SIZE])
    x_image = tf.reshape(x_, [-1, IMAGE_HEIGHT, IMAGE_WIDTH, 3])

    # generate enhanced image
    enhanced = resnet(x_image)

    with tf.compat.v1.Session(config=config) as sess:
        # the upstream script walked a whole test directory; here a single
        # photo is used as the processing source instead:
        # test_dir = dped_dir + phone.replace("_orig", "") + "/test_data/full_size_test_images/"
        # test_photos = [f for f in os.listdir(test_dir) if os.path.isfile(test_dir + f)]
        test_photos = [pic_path]

        if test_subset == "small":
            # use five first images only
            test_photos = test_photos[0:5]

        if phone.endswith("_orig"):
            # load pre-trained model
            saver = tf.compat.v1.train.Saver()
            saver.restore(sess, "models_orig/" + phone)

            for photo in test_photos:
                # load the image and crop it if necessary
                new_pic_name = uuid.uuid4()
                print("Testing original " + phone.replace("_orig", "") + " model, processing image " + photo)
                image = np.float16(np.array(
                    Image.fromarray(imageio.imread(photo)).resize(
                        [res_sizes[phone][1], res_sizes[phone][0]]))) / 255

                image_crop = extract_crop(image, resolution, phone, res_sizes)
                image_crop_2d = np.reshape(image_crop, [1, IMAGE_SIZE])

                # get enhanced image
                enhanced_2d = sess.run(enhanced, feed_dict={x_: image_crop_2d})
                enhanced_image = np.reshape(enhanced_2d, [IMAGE_HEIGHT, IMAGE_WIDTH, 3])

                before_after = np.hstack((image_crop, enhanced_image))
                photo_name = photo.rsplit(".", 1)[0]

                # save the results as .png images, named by uuid to avoid collisions
                # (the upstream version wrote to visual_results/ using photo_name):
                # imageio.imwrite("visual_results/" + phone + "_" + photo_name + "_enhanced.png", enhanced_image)
                # imageio.imwrite("visual_results/" + phone + "_" + photo_name + "_before_after.png", before_after)
                imageio.imwrite(os.path.join(output_dir, '{}.png'.format(new_pic_name)), enhanced_image)
                imageio.imwrite(os.path.join(output_dir, '{}_before_after.png'.format(new_pic_name)), before_after)
                return os.path.join(output_dir, '{}.png'.format(new_pic_name))
        else:
            num_saved_models = int(len([f for f in os.listdir("models_orig/")
                                        if f.startswith(phone + "_iteration")]) / 2)

            if iteration == "all":
                iteration = np.arange(1, num_saved_models) * 1000
            else:
                iteration = [int(iteration)]

            for i in iteration:
                # load pre-trained model
                saver = tf.compat.v1.train.Saver()
                saver.restore(sess, "models_orig/" + phone + "_iteration_" + str(i) + ".ckpt")

                for photo in test_photos:
                    # load the image and crop it if necessary
                    new_pic_name = uuid.uuid4()
                    print("iteration " + str(i) + ", processing image " + photo)
                    image = np.float16(np.array(
                        Image.fromarray(imageio.imread(photo)).resize(
                            [res_sizes[phone][1], res_sizes[phone][0]]))) / 255

                    image_crop = extract_crop(image, resolution, phone, res_sizes)
                    image_crop_2d = np.reshape(image_crop, [1, IMAGE_SIZE])

                    # get enhanced image
                    enhanced_2d = sess.run(enhanced, feed_dict={x_: image_crop_2d})
                    enhanced_image = np.reshape(enhanced_2d, [IMAGE_HEIGHT, IMAGE_WIDTH, 3])

                    before_after = np.hstack((image_crop, enhanced_image))
                    photo_name = photo.rsplit(".", 1)[0]

                    # save the results as .png images, named by uuid to avoid collisions
                    imageio.imwrite(os.path.join(output_dir, '{}.png'.format(new_pic_name)), enhanced_image)
                    imageio.imwrite(os.path.join(output_dir, '{}_before_after.png'.format(new_pic_name)), before_after)
                    return os.path.join(output_dir, '{}.png'.format(new_pic_name))


if __name__ == '__main__':
    print(beautify('C:/Users/yi/Desktop/6.jpg', 'result/'))

Code Notes

1. The beautify method takes three parameters: the photo path, the output directory, and whether to use the GPU (used by default). A usage sketch follows this list.

2. Two images are written out; to keep file names from colliding, a uuid is used as the file name, and the file suffixed with _before_after is the side-by-side comparison.

3. The input file is not validated; you can add checks yourself if needed.
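Putting it together, a minimal usage sketch (the input path is the same placeholder the script's __main__ block uses; note that beautify() does not create the output directory, so result/ must exist before the call):

# Assumed example call: adjust the input path to one of your own photos.
out_path = beautify('C:/Users/yi/Desktop/6.jpg', 'result/', gpu=1)
print(out_path)
# result/ now contains <uuid>.png (the enhanced photo) and
# <uuid>_before_after.png (the side-by-side comparison).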

Checking the Results

(figures: before/after results)

How about that? Pretty slick, right?

Summary

Digging into this project was a lot of fun, and recording and sharing it is another kind of joy. I used to think of myself as a cheerful, outgoing person, but looking back over my life carefully, what I mostly find is quiet and solitude. I once read a line from García Márquez: on the road to growing old, rather than fighting loneliness, learn to enjoy it. So there is nothing wrong with solitude; immersing yourself in the other world of programs is simply another state of mind.

To share:

Life has never existed apart from solitude. Whether in our birth, our growing up, our falling in love, or our successes and failures, until the very end, loneliness sits like a shadow in a corner of life. — García Márquez

Original article: https://blog.csdn.net/zhiweihongyan1/article/details/121586116
