RubyPDF Blog tutorial

How to Use Google App Engine UrlFetch API to download the files over 1M

Nick Johnson said:

Currently, API calls are limited to 1MB, but requests and responses are limited to 10MB. If you want to permit larger files, you could split them up into chunks and store them in the datastore. The 30 second request limit applies only to the time your code spends processing the request, not the time spent receiving the request or sending the response.
A large file API is on our roadmap, which will make handling large files from users much easier.

I offer a UrlFetch download function in my PDF Password Remover Online application, but I did not want it restricted to PDFs of no more than 1MB. After some study I found a solution: have the UrlFetch API download no more than 1MB of data per request, and repeat the request until all the data is downloaded. Of course, one limit still applies: the 30-second request deadline.

public static byte[] download(String url) throws IOException, InterruptedException {

    ArrayList<byte[]> al = new ArrayList<byte[]>();

    int seg = 1024 * 1000; // fetch at most ~1MB per request
    int startPosition = 0;
    URL u = new URL(url);

    while (true) {
        HttpURLConnection httpConn = null;
        try {
            int endPosition = startPosition + seg - 1; // Range end is inclusive

            httpConn = (HttpURLConnection) u.openConnection();
            httpConn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv: Gecko/20080404 Firefox/");
            // ask the server for just this segment of the file
            httpConn.setRequestProperty("Range", "bytes=" + startPosition + "-" + endPosition);

            InputStream in = httpConn.getInputStream();
            byte[] b = Util.toByteArray(in); // IOUtils.toByteArray(in);
            al.add(b);

            if (b.length < seg) // short read: the whole file has been fetched
                break;
            startPosition = endPosition + 1;
        } finally {
            if (httpConn != null)
                httpConn.disconnect();
        }
    }

    return Util.saveBytesArrayListTobytesArray(al);
}
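The code above relies on two helper methods, Util.toByteArray and Util.saveBytesArrayListTobytesArray, whose implementations are not shown in the post. A minimal sketch of what they might look like (the class and method names are taken from the code above; the internals are my assumption):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;

public class Util {

    // Read an InputStream fully into a byte array (same idea as IOUtils.toByteArray).
    public static byte[] toByteArray(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    // Concatenate the downloaded segments back into one byte array.
    public static byte[] saveBytesArrayListTobytesArray(ArrayList<byte[]> al) {
        int total = 0;
        for (byte[] b : al) {
            total += b.length;
        }
        byte[] result = new byte[total];
        int pos = 0;
        for (byte[] b : al) {
            System.arraycopy(b, 0, result, pos, b.length);
            pos += b.length;
        }
        return result;
    }
}
```

Any equivalent helpers would work; the only requirement is that the segments are joined in the order they were fetched.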
