Save website as PDF with date

Since citing a website means citing the way the site looked on a certain day, it would be good if Paperpile could save a PDF or screen capture of the website when you choose the “Add as a website reference” option. Personally I really like the way the Evernote web clipper saves and archives a simplified view of a webpage, but I find the process of creating an Evernote clip, getting a share URL and then entering that in Paperpile rather cumbersome. Both Zotero and Sente are able to save a version of the website directly to the citation, and I think Paperpile should be able to as well.

I think someone else has already made this feature request, but Paperpile also needs to save the access date when doing this, in addition to the publication date, since many bibliographic styles require it.

27 Likes

+1 I’d also love this feature.

2 Likes

I would very much like this feature too, please: it would prevent link rot when a website goes dark and keep the content from being lost. That would be very useful!

1 Like

I would really like this too. Right now I have to print the webpage to a PDF, then drag and drop it to the reference in Paperpile.

1 Like
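
In the meantime, that manual print step can be scripted. Here is a minimal sketch, assuming headless Chrome/Chromium is installed and on the PATH; the --headless and --print-to-pdf flags are standard Chrome options, while the date-stamped filename is just an illustration of one way to record the access date:

```python
import subprocess
from datetime import date
from pathlib import Path

def save_webpage_as_pdf(url: str, out_dir: str = ".", chrome: str = "google-chrome") -> Path:
    """Print a webpage to PDF, with the access date recorded in the filename."""
    accessed = date.today().isoformat()                   # e.g. 2019-06-03
    slug = url.split("//", 1)[-1].replace("/", "_")[:80]  # crude, purely illustrative slug
    out_path = Path(out_dir) / f"{slug}_{accessed}.pdf"
    subprocess.run(
        [chrome, "--headless", "--disable-gpu", f"--print-to-pdf={out_path}", url],
        check=True,
    )
    return out_path

if __name__ == "__main__":
    pdf = save_webpage_as_pdf("https://example.com")
    print(f"Saved snapshot to {pdf}")  # attach this file to the Paperpile reference by hand
```

The PDF still has to be dragged onto the reference by hand, but at least the access date survives in the filename even if the reference’s Accessed field is forgotten.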

And a yes please to this feature from me too.

1 Like

I’d love this too. For now I print a website to PDF and add it as a file. Zotero has this functionality and calls it “snapshots”.

1 Like

Do you automatically get a PDF in Zotero, or is it saved as HTML?

No, it’s not a PDF, it’s HTML. It does include images, styles, etc. I’d prefer a PDF, but I guess saving a website to PDF is not straightforward, and HTML does the trick. For me the most important thing is to have a backup copy in case the website goes down, and to know exactly what a website looked like on a certain date.

1 Like

+1 for this feature. I’d prefer saving as a PDF or a single MHTML file, rather than Zotero’s legacy approach of saving the page plus a related directory for the external resources (images, scripts, etc.). PDF is always better, as long as the result is readable and the page has print-friendly CSS.

Samir
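
On the single-file option: a sketch of one possible route, assuming Playwright with its bundled Chromium is installed; Page.captureSnapshot is the DevTools-protocol method that returns the page as a single MHTML document, and the filename here is again just an illustration:

```python
from pathlib import Path
from playwright.sync_api import sync_playwright

def snapshot_as_mhtml(url: str, out_file: str) -> Path:
    """Capture a single-file MHTML snapshot of a page via Chromium's DevTools protocol."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")   # let images and styles finish loading
        cdp = page.context.new_cdp_session(page)   # Chromium-only DevTools session
        snapshot = cdp.send("Page.captureSnapshot", {"format": "mhtml"})
        browser.close()
    out = Path(out_file)
    out.write_text(snapshot["data"])
    return out

if __name__ == "__main__":
    snapshot_as_mhtml("https://example.com", "example_accessed_2019-06-03.mhtml")
```

A PDF of the same page can be made with Chrome’s --print-to-pdf flag (see the earlier sketch), so the choice between PDF and MHTML mostly comes down to how well the page prints.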

+1 from me too.

In relation to this request, or perhaps partly to satisfy it, it would be great if Paperpile could be somehow integrated with WebCite.

The dream would be that, simply from adding a webpage as a reference, Paperpile would:

  1. Archive the URL in WebCite,
  2. Save the archived URL as the reference in Paperpile,
  3. Create (and save in Paperpile) a PDF version of the archived webpage,
  4. And, serious dreaming now: work with WebCite to generate a DOI for the archived webpage!

That would be a feature seriously worth paying for. Citing webpages (blogs, etc.) is becoming increasingly important, and this would be a really excellent way of doing it.
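
For what it’s worth, steps 1-3 of that dream can be approximated by hand today. This is a rough sketch only: WebCite’s on-demand API isn’t shown here (I’d be guessing at it), so the Internet Archive’s public Save Page Now endpoint stands in for the archiving step, and the PDF step reuses the headless-Chrome approach sketched earlier in the thread. Step 4 (minting a DOI) is beyond a script like this.

```python
import subprocess
from datetime import date

import requests  # assumption: plain HTTP is enough for the archiving step

def archive_url(url: str) -> str:
    """Stand-in for step 1: ask the Wayback Machine to capture the page.
    Assumes the public /save/ endpoint redirects to the archived snapshot
    (it usually does, though it can be rate-limited)."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    resp.raise_for_status()
    return resp.url  # URL of the archived copy after redirects

def print_to_pdf(url: str, out_file: str) -> None:
    """Step 3: PDF of the archived page, same idea as the earlier headless-Chrome sketch."""
    subprocess.run(
        ["google-chrome", "--headless", "--disable-gpu", f"--print-to-pdf={out_file}", url],
        check=True,
    )

if __name__ == "__main__":
    original = "https://example.com/blog/post"
    archived = archive_url(original)
    accessed = date.today().isoformat()
    print_to_pdf(archived, f"snapshot_{accessed}.pdf")
    # Step 2 is still manual: add `archived` (plus `original` and `accessed`)
    # to the website reference in Paperpile and attach the PDF.
    print(archived, accessed)
```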

4 Likes

I think that would be a fantastic feature needed by most people.

I am not sure whether the method mentioned also saves the webpage with the date, and whether it can save the images on the webpage. Please advise.

Save the webpage as a PDF.

Where do you store the PDF file that you save and link?

+1! Readcube used to do this too, which was terribly convenient. Even better if you could make a permanent URL to the PDF screenshot for sharing.

This thread seems to have gone dark. I am a new subscriber to Paperpile. It’s wonderful, with the huge exception that it doesn’t archive web pages. Has this gone anywhere over the past couple of years, or does it remain an unacknowledged feature request?

We do not currently have plans for this feature; we have limited resources, and other features - such as the mobile app and the Word integration - have taken priority. That said, we understand the utility of such an automated process and encourage users who would like this feature to voice their interest and explain their use cases in this thread.

I certainly understand that this might not be a top priority for Paperpile, especially given its academic roots and focus, its work on a mobile app, and its small staff. I am an entrepreneur and am constantly researching new topics for presentations, reports, white papers and other documents. Many of the ideas I pull come from non-academic sources, and I get a tremendous amount of research and inspiration from web-based material. If a site changes or pages disappear, the value of the citation is zero. It is currently a cumbersome process to save a page or print it to PDF and then add the citation. This is clearly an area where Evernote is exceptionally strong; however, their system is not at all geared to reference work, and I frequently get lost without details on document sources.

Adding the ability to pull in html/web-based material would make Paperpile a killer app outside of the academic world.

3 Likes

I agree with you completely and would add that academics frequently rely on non-academic primary and secondary sources when researching and writing. I can’t believe that no one has tackled this as of 2019. Evernote’s development team maintains a great web clipper but is blind to the writing, citing and referencing workflow.

Paperpile already handles metadata clipping for reference material, automatic sourcing of full-text PDFs, and inline citations incredibly well - better than the competition. If they added the capability to clip clutter-free HTML pages and/or clip to PDF, they would create a market-leading opportunity to steal tens of thousands of Evernote Premium subscribers, as well as significant market share from other academic referencing tools like EndNote, Mendeley, Papers and Zotero.

2 Likes

You are spot on. There is a very big market opportunity here. I hope that Paperpile’s founders take note! There are many precedents: applications that sit on top of Google and other platforms have killed it by being more flexible, easier, lighter, and inherently cloud-based.

3 Likes

I create a PDF of the webpage and save it in Evernote, then make a note in Paperpile of the date I created the PDF and the Notebook I saved it in. This isn’t ideal - I sometimes return to a website, and I don’t always remember to create a PDF or screenshot every time.

Government and Congressional websites are constantly changing, and lately data is being removed and archived, so this is becoming a critical task for me.