[TOC]
# Chunked File Upload in Java
## Why Use Chunked Upload

When uploading files, you inevitably run into files that are too large and take too long to send in a single request. Uploading the file in chunks solves this problem.
## What Is Chunked Upload?

Put simply: suppose you have to move something very large, say a big bucket of water. Carrying it all at once is slow and exhausting. Instead, you can pour the water into a few dozen (or more) small bottles, which are much easier to carry, and once you reach the destination, pour them all back into the big bucket. Dividing the water into small bottles is the chunking step; pouring it back into the bucket is the merging step. Chunking and merging are likewise the two key stages of chunked file upload.
## Front-End and Back-End Code

The chunking is done on the front end; the merging after upload is done on the back end.
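Before looking at the real front-end/back-end code, the whole round trip can be sketched in plain Java, in memory and without HTTP. The class and method names here are illustrative, not part of the project below:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

// Minimal sketch of the idea: cut a "file" (here a byte array) into
// fixed-size shards, then merge the shards back in index order.
public class SplitMergeDemo {

    // split: the front end's job in the real code (File.slice in JS)
    static List<byte[]> split(byte[] data, int shardSize) {
        List<byte[]> shards = new ArrayList<>();
        for (int start = 0; start < data.length; start += shardSize) {
            int end = Math.min(start + shardSize, data.length);
            shards.add(Arrays.copyOfRange(data, start, end));
        }
        return shards;
    }

    // merge: the back end's job in the real code (RandomAccessFile concatenation)
    static byte[] merge(List<byte[]> shards) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] shard : shards) {
            out.write(shard, 0, shard.length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] original = new byte[10_000];
        new Random(42).nextBytes(original);
        List<byte[]> shards = split(original, 3000);  // 3000 + 3000 + 3000 + 1000 bytes
        System.out.println("shards: " + shards.size());                          // 4
        System.out.println("lossless: " + Arrays.equals(original, merge(shards))); // true
    }
}
```

As long as the shards are reassembled in the order they were cut, the merged bytes are identical to the original; that ordering is what the `index` parameter guarantees in the real code below.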
Front-end code:
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>JS Chunked Upload</title>
</head>
<body>
    <input type="file" name="slice" id="slice">
    <br/>
</body>
<script src="http://libs.baidu.com/jquery/1.8.3/jquery.min.js"></script>
<script type="text/javascript">
    $("#slice").change(function (event) {
        var file = $("#slice")[0].files[0];
        PostFile(file, 0);
    });

    function PostFile(file, i, uuid) {
        var name = file.name,
            size = file.size,
            shardSize = 10 * 1024 * 1024,               // 10 MB per shard
            shardCount = Math.ceil(size / shardSize);
        if (i >= shardCount) {
            return;
        }
        if (uuid === null || uuid === undefined) {
            uuid = guid();                              // one id groups all shards of this upload
        }
        var start = i * shardSize;
        var end = start + shardSize;
        var packet = file.slice(start, end);            // cut the current shard out of the file
        var form = new FormData();
        form.append("uuid", uuid);
        form.append("data", packet);
        form.append("name", name);
        form.append("totalSize", size);
        form.append("total", shardCount);
        form.append("index", i + 1);                    // 1-based shard index
        $.ajax({
            url: "http://127.0.0.1:8080/index/doPost",
            type: "POST",
            data: form,
            async: true,
            dataType: "json",
            processData: false,
            contentType: false,
            success: function (msg) {
                console.log(msg);
                if (msg.status === 201) {               // shard stored, send the next one
                    i++;
                    PostFile(file, i, uuid);
                } else if (msg.status === 502) {        // server-side write failed, retry this shard
                    setTimeout(function () {            // setTimeout, not setInterval: retry once, not forever
                        PostFile(file, i, uuid);
                    }, 2000);
                } else if (msg.status === 200) {        // last shard stored, ask the server to merge
                    merge(uuid, name);
                    console.log("upload finished");
                } else if (msg.status === 500) {
                    console.log("shard " + msg.i + " failed to upload!");
                } else {
                    console.log("unknown error");
                }
            }
        });
    }

    function merge(uuid, fileName) {
        $.ajax({
            url: "http://127.0.0.1:8080/index/merge",
            type: "GET",
            data: {uuid: uuid, newFileName: fileName},
            async: true,
            dataType: "json",
            success: function (msg) {
                console.log(msg);
            }
        });
    }

    function guid() {
        // RFC 4122-style random id used to group this file's shards on the server
        return 'xxxxxxxxxxxx4xxxyxxxxxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
            var r = Math.random() * 16 | 0,
                v = c === 'x' ? r : (r & 0x3 | 0x8);
            return v.toString(16);
        });
    }
</script>
</html>
```
Back-end code:
```java
import org.apache.commons.io.FileUtils;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.multipart.MultipartFile;
import org.springframework.web.multipart.MultipartHttpServletRequest;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

@Controller
@RequestMapping("/index")
public class test_controller {

    private static final String fileUploadTempDir = "D:/portalupload/fileuploaddir";
    private static final String fileUploadDir = "D:/portalupload/file";

    // Receives one shard per request and stores it in a per-upload temp directory.
    @RequestMapping("/doPost")
    @ResponseBody
    public Map<String, Object> fragmentation(HttpServletRequest req, HttpServletResponse resp) {
        resp.addHeader("Access-Control-Allow-Origin", "*");
        Map<String, Object> map = new HashMap<>();
        MultipartHttpServletRequest multipartRequest = (MultipartHttpServletRequest) req;
        MultipartFile file = multipartRequest.getFile("data");
        int index = Integer.parseInt(multipartRequest.getParameter("index"));
        int total = Integer.parseInt(multipartRequest.getParameter("total"));
        String fileName = multipartRequest.getParameter("name");
        String name = fileName.substring(0, fileName.lastIndexOf("."));
        String uuid = multipartRequest.getParameter("uuid");
        // each shard is saved as "<uuid><name><index>.tem" so merge() can sort by index
        File uploadFile = new File(fileUploadTempDir + "/" + uuid, uuid + name + index + ".tem");
        if (!uploadFile.getParentFile().exists()) {
            uploadFile.getParentFile().mkdirs();
        }
        try {
            file.transferTo(uploadFile);
            // 201: shard stored, client sends the next one; 200: last shard stored, client may merge
            map.put("status", index < total ? 201 : 200);
        } catch (IOException e) {
            e.printStackTrace();
            map.put("status", 502);   // tells the client to retry this shard
        }
        return map;
    }

    // Concatenates all shards of one upload into the final file, in index order.
    @RequestMapping(value = "/merge", method = RequestMethod.GET)
    @ResponseBody
    public Map<String, Object> merge(String uuid, String newFileName) {
        Map<String, Object> retMap = new HashMap<>();
        try {
            File dirFile = new File(fileUploadTempDir + "/" + uuid);
            if (!dirFile.exists()) {
                throw new RuntimeException("file does not exist!");
            }
            String[] fileNames = dirFile.list();
            String name = newFileName.substring(0, newFileName.lastIndexOf("."));
            // sort shards numerically by the index embedded in "<uuid><name><index>.tem"
            Arrays.sort(fileNames, (o1, o2) -> {
                int i1 = Integer.parseInt(o1.substring(o1.indexOf(name) + name.length()).split("\\.tem")[0]);
                int i2 = Integer.parseInt(o2.substring(o2.indexOf(name) + name.length()).split("\\.tem")[0]);
                return i1 - i2;
            });
            File targetFile = new File(fileUploadDir, newFileName);
            if (!targetFile.getParentFile().exists()) {
                targetFile.getParentFile().mkdirs();
            }
            RandomAccessFile writeFile = new RandomAccessFile(targetFile, "rw");
            byte[] buf = new byte[1024 * 3];
            for (String fileName : fileNames) {
                File sourceFile = new File(fileUploadTempDir + "/" + uuid, fileName);
                RandomAccessFile readFile = new RandomAccessFile(sourceFile, "r"); // read-only is enough
                int byteCount;
                while ((byteCount = readFile.read(buf)) != -1) {
                    writeFile.write(buf, 0, byteCount); // write exactly the bytes read, no buffer reallocation
                }
                readFile.close();
                FileUtils.deleteQuietly(sourceFile);    // shard is no longer needed
            }
            writeFile.close();
            retMap.put("code", "200");
        } catch (IOException e) {
            e.printStackTrace();
            retMap.put("code", "500");
        }
        return retMap;
    }
}
```
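One deployment detail worth noting: Spring Boot's default multipart limit (1 MB per file in recent versions) is smaller than the 10 MB shard size used on the front end, so shards would be rejected without raising it. A sketch of the relevant `application.properties` entries, assuming Spring Boot 2.x (the exact values are a judgment call, chosen to comfortably fit one shard):

```properties
# allow each 10 MB shard through Spring's multipart parsing
spring.servlet.multipart.max-file-size=20MB
spring.servlet.multipart.max-request-size=20MB
```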
Testing:

You can see that the file was uploaded in chunks. I uploaded a system image file of over 4 GB, and the upload was quite fast.