DbKona and large resultsets

Hi,
I'm working on an application that will allow a user to run ad-hoc queries against
a database. These queries can return huge resultsets (between 50,000 and 450,000
rows). Per our requirements, we can't limit (through the database, anyway) the
number of rows returned from an arbitrary query. In trying to deal with the large
number of results, I've been looking at the dbKona classes. I can't find a solution
with the regular JDBC stuff, and CachedRowSet is just not meant to handle that
much data.
While running a test (based on an example given in the dbKona manual), I keep
running into out of memory issues. The code follows:
QueryDataSet qds = new QueryDataSet(resultSet);
while (!qds.allRecordsRetrieved()) {
    DataSet currentData = qds.fetchRecords(1000);
    // Process the thousand records . . .
    currentData.clearRecords();
    currentData = null;
    qds.clearRecords();
}
I'm currently not doing any processing with the records returned for this trivial
test. I just get them and clear them immediately to see if I can actually get
them all. On a resultset of about 45,000 rows I get an out of memory error about
halfway through the fetches. Are the records still being held in memory? Am
I doing something incorrectly?
Thanks for any help,
K Lewis

I think I found the problem. Left over from an old test, the Statement object I created
returned a scrollable ResultSet. I couldn't immediately find a restriction on this in
the docs (or maybe it's just a problem with the Oracle driver?). As soon as I moved
back to the default ResultSet type (FORWARD_ONLY) I was able to process
150,000 records just fine.
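In case anyone hits the same thing, here is a minimal sketch of the working setup; connection and sql are stand-ins for your own Connection and query string, and the dbKona loop above then consumes resultSet as before:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Sketch: open a forward-only, read-only statement so the driver can
    // stream rows instead of caching the whole resultset on the client.
    static ResultSet openForwardOnly(Connection connection, String sql) throws SQLException {
        Statement stmt = connection.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,   // the default type
                ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(1000);               // hint: fetch rows in blocks of 1,000
        return stmt.executeQuery(sql);
    }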
"Sree Bodapati" <[email protected]> wrote:
Hi
Can you tell me what JDBC driver you are using?
sree
"K Lewis" <[email protected]> wrote in message
news:[email protected]..

Similar Messages

  • How do I handle large resultsets in CRXI without a performance issue?

    Hello -
    Problem Definition
    I have a performance problem displaying a large resultset of data on a Crystal report.  The report takes about 4 minutes or more, depending on the resultset size.
    How do you handle large resultsets in Crystal Reports without a performance issue?
    Environment
    Crystal Reports XI
    Apache WebSvr 2.X, Jboss 4.2.3, Struts
    Java Reporting Component (JRC),Crystal Report Viewer (CRV)
    Firefox
    DETAILS
    I use the CRXI thick client to build my report (.rpt) and then use it in my web application (webapp) under JBoss.
    The user specifies the filter criteria to generate a report (date range etc.) and submits the request to the webapp.  The webapp queries the database and gets a resultset.
    I initialize the JRC and CRV according to all the specifications and finally call the processHttpRequest method of the Crystal Report Viewer to display the report in the browser.
    So.....
    - Request received to generate a report with a filter criteria
    - Query DB to get resultset
    - Initialize JRC and CRV
    - finally display the report by calling
        reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
    The performance problem is within the last step.  I put logs everywhere and noticed that the database query doesn't take too long to return the resultset.  Everything processes pretty quickly until I call the processHttpRequest of CRV.  This method just hangs for a long time before displaying the report in the browser.
    CRV runs pretty fast when the resultset is smaller, but for a large resultset it takes a very long time.
    I do have subreports and use Crystal Report formulas on the reports.  Some of them are used for grouping also.  But I don't think subreports are the real culprit here, because I have some other reports that don't have any subreports, and they too get really slow displaying large resultsets.
    Solutions?
    So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
    I have thought of some half baked ideas.
    A) Use external pagination and fetch data only for the current page being displayed.  But for this, CRXI must allow me to create my own buttons (previous, next, last), so I can control the click event and fetch data accordingly.  I tried capturing events by registering the event handler addToolbarCommandEventListener of CRV, but my listener gets invoked after the processHttpRequest method completes, which doesn't help.
    Somehow I need to be able to control the UI by adding my own previous page, next page, and last page buttons and handling their click events.
    B) Automagically have CRXI use JavaScript functionality to allow browser-side page navigation.  So maybe the first time it'll take 5 minutes to display the report, but once it's displayed, the user can go to any page without sending the request back to the server.
    C) Try using Crystal Reports 2008.  I'm open to using this version, but I couldn't figure out if it has any features that can help me do external pagination or anything else that can handle large resultsets.
    D) Will using the Crystal Reports servers (cache server/application server etc.) help in any way?  I read a little on the Crystal Page Viewer, Interactive Viewer, Part Viewer etc., but I'm not sure if any of these things are going to solve the issue.
    I'd appreciate it if someone can point me in the right direction.

    Essentially, the answer is to use smaller resultsets, or to pull from the database directly instead of using resultsets.

  • Returning a Large ResultSet

    At the moment we use our own queryTableModel to fetch data from the database. Although we use the traditional method (looping on ResultSet.next()) to load the data into a Vector, we find that large ResultSets (1000+ rows) take a considerable amount of time to load.
    Is there a more efficient way of storing the ResultSet other than using a Vector? We believe the addElement method constantly expanding the Vector is the cause of the slowdown.
    Any tips appreciated.

    One more thing:
    > We believe the addElement method constantly expanding the vector is the cause of the slowdown.
    You are probably right, but this is easy to avoid: both Vector and ArrayList have a constructor in which you can specify the initial size, so you could save much time in growing the list. In Vector, as in other collection classes such as HashSet, there is also a constructor that takes the load factor in addition to the initial size.
    Abraham.
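    A minimal sketch of that idea, assuming a two-column result and an expected row count of about 10,000 (both illustrative):

        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        // Pre-sizing the list avoids the repeated internal array copies
        // that happen when it grows one default-sized step at a time.
        static List<String[]> load(ResultSet rs) throws SQLException {
            List<String[]> rows = new ArrayList<>(10000); // expected row count
            while (rs.next()) {
                rows.add(new String[] { rs.getString(1), rs.getString(2) });
            }
            return rows;
        }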

  • PL/SQL Dev query session lost on large resultsets after db update

    We have a problem with our PL/SQL Developer tool (www.allroundautomations.nl) since updating our database.
    So far we had Oracle DB 10.1.0.5 Patch 2 on W2k3, and XP clients using Instant Client 10.1.0.5 and PL/SQL Developer 5.16 or 6.05 to query our DB. This scenario worked well.
    Now we upgraded to Oracle 10g 10.1.0.5 Patch 25, and now our PL/SQL Developer 5.16 or 6.05 (on IC 10.1.0.5) can log on to the DB and also query small tables. But as soon as the resultset reaches a certain size, the query on a table won't come to an end and always shows "Executing...". We can only press "Break", which results in "ORA-12152: TNS: unable to send break message" and "ORA-03114: not connected to ORACLE".
    If I narrow the resultset down on the same table, it works like before.
    If I watch the sessions on small-resultset queries, I see the corresponding session, but on large-resultset queries the session seems to close immediately.
    To solve this issue I already tried installing the newest PL/SQL Developer 7.1.5 (trial) and/or a newer Instant Client version (10.2.0.4), neither of which solved the problem.
    Is there a new option in 10.1.0.5 Patch 25 (or before) which closes sessions if the resultsets get too large over a slower internet connection?
    BTW: using sqlplus in the Instant Client directory, or even Excel over ODBC on the same client, returns the full resultset without problems. Could this be some kind of timeout problem?
    Edit:
    Here is a snippet of the trace file on the client, right after executing the select statement. Some data seems to be retrieved, and then it ends with these lines:
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 2D 20 49 6E 74 72 61 6E |-.Intran|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 74 2D 47 72 75 6E 64 |et-Grund|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 6C 61 67 65 6E 02 C1 04 |lagen...|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 02 C1 03 02 C1 0B 02 C1 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 51 00 02 C1 03 02 C1 2D |Q......-|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 05 48 4B 4F 50 50 01 80 |.HKOPP..|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 03 3E 64 66 01 80 07 78 |.>df...x|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 0B 0F 01 01 01 07 76 |e......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 09 01 01 07 76 |.......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 18 01 01 07 78 |.......x|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 0B 0F 01 01 01 07 76 |e......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 09 01 01 07 76 |.......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 18 01 01 02 C1 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 3B 02 C1 02 01 80 00 00 |;.......|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 00 00 00 00 00 00 00 00 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 00 00 01 80 15 0C 00 |....... |
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: got NSPTDA packet
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: NSPTDA flags: 0x0
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: what=1, bl=2001
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: nsctxrnk=0
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nioqrc: exit
    (1992) [20-AUG-2008 17:13:00:953] nioqrc: entry
    (1992) [20-AUG-2008 17:13:00:953] nsdo: entry
    (1992) [20-AUG-2008 17:13:00:953] nsdo: cid=0, opcode=85, bl=0, what=0, uflgs=0x0, cflgs=0x3
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: rank=64, nsctxrnk=0
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: nsctx: state=8, flg=0x100400d, mvd=0
    (1992) [20-AUG-2008 17:13:00:953] nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: switching to application buffer
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: entry
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: recving a packet
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: entry
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: reading from transport...
    (1992) [20-AUG-2008 17:13:00:968] nttrd: entry

    Found nothing in the \bdump alert.log or \bdump trace files. I only have the DEFAULT profile, and everything is set to UNLIMITED there.
    But \udump generates a trace file the moment I execute the query:
    Dump file <path>\udump\<sid>ora4148.trc
    Fri Aug 22 09:12:18 2008
    ORACLE V10.1.0.5.0 - Production vsnsta=0
    vsnsql=13 vsnxtr=3
    Oracle Database 10g Release 10.1.0.5.0 - Production
    With the OLAP and Data Mining options
    Windows Server 2003 Version V5.2 Service Pack 2
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:898M/3071M, Ph+PgF:2675M/4967M, VA:812M/2047M
    Instance name: <SID>
    Redo thread mounted by this instance: 1
    Oracle process number: 33
    Windows thread id: 4148, image: ORACLE.EXE (SHAD)
    *** 2008-08-22 09:12:18.731
    *** ACTION NAME:(SQL Window - select * from stude) 2008-08-22 09:12:18.731
    *** MODULE NAME:(PL/SQL Developer) 2008-08-22 09:12:18.731
    *** SERVICE NAME:(<service-name>) 2008-08-22 09:12:18.731
    *** SESSION ID:(145.23131) 2008-08-22 09:12:18.731
    opitsk: network error occurred while two-task session server trying to send break; error code = 12152
    This trace is only generated if a query with an expected large resultset fails. If I narrow down the resultset, no trace is written, and the query then works, of course.

  • Processing large Resultsets quickly or parallely

    How do I process a large ResultSet that contains purchase entries for, say, 20K users? Each user may have one or more purchase entries. The resultset is ordered by userid, and the other fields are itemname, quantity, and price.
    Mine is a quad-processor machine.
    Thanks.

    You're going to need to provide a lot more details. For instance, is the slow part reading the data from the database, or the processing you are going to do on the data? If the former, then in order to work in parallel you probably need separate threads with their own resultsets. If the latter, then you could parallelize the work by having one thread read the resultset and push the data onto a shared work queue from which multiple worker threads read, as in the sketch below. These are just a few of the possibilities.
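    For the latter case, here is a minimal producer/consumer sketch; the Purchase fields match the columns described above, while the queue capacity, worker count, and exact column names are illustrative assumptions:

        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        class ParallelProcessing {
            // Row holder for one purchase entry.
            record Purchase(String userId, String itemName, int quantity, double price) {}

            // End-of-stream marker so workers know when to stop.
            static final Purchase POISON = new Purchase(null, null, 0, 0);

            // A single reader drains the ResultSet (JDBC ResultSets are not
            // thread-safe); the workers do the per-row processing in parallel.
            static void process(ResultSet rs, int workers) throws SQLException, InterruptedException {
                BlockingQueue<Purchase> queue = new ArrayBlockingQueue<>(10000);
                ExecutorService pool = Executors.newFixedThreadPool(workers);
                for (int i = 0; i < workers; i++) {
                    pool.submit(() -> {
                        try {
                            for (Purchase p; (p = queue.take()) != POISON; ) {
                                // ... per-row work goes here ...
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                }
                while (rs.next()) {
                    queue.put(new Purchase(rs.getString("userid"), rs.getString("itemname"),
                            rs.getInt("quantity"), rs.getDouble("price")));
                }
                for (int i = 0; i < workers; i++) {
                    queue.put(POISON); // one end marker per worker
                }
                pool.shutdown();
            }
        }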

  • On iOS 7.0.2, every time I unlock my phone while my music is playing, it skips. Forward, backward, and by small and large amounts of time.

    On iOS 7.0.2, every time I unlock my phone while my music is playing, it skips. Forward, backward, and by small and large amounts of time.

    Hi cowboyincognito,
    Thanks for visiting Apple Support Communities.
    If you don't have the option to play music by genre on your iPhone, try this step to find the Genre option:
    Browse your music library.
    You can browse your music by playlist, artist, or other category. For other browse options, tap More. Tap any song to play it.
    To rearrange the tabs in the Music app, tap More, then tap Edit and drag a button onto the one you want to replace.
    Best Regards,
    Jeremy

  • Web Gallery - need file name on thumbnail and large image

    I think I have tried all the CS3 templates, but haven't found what our client is requesting. Is there a template that shows the file name on both the thumbnail and large image, and will make large images of 600-800 pixels (long dimension)?
    Thanks in advance,
    Dan Clark

    Thanks for your reply, Nini. Yes, I had gone through all the presets and options. Was hoping I might have missed something, or that someone knew a trick/workaround. We've been using Table-Minimal for years, which is my overall favorite. I like the ability to make a large image, but it can't do what the client is requesting. They've made a list of selects from some very large galleries (200-300 shots each), and now want to jump directly to the shots they've previously chosen, in order to show their coworkers. I've also considered "Add Numeric Links", but I find that either confuses people, or they give me that number instead of the file name/number, which makes a lot of extra work for us.

  • Difference between text size in General settings and large type in Accessibility settings

    Difference between text size in General settings and large type in Accessibility settings?

    None that I can see.

  • DbKona and WLS 8.1

    Hi,
    dbKona no longer exists in WLS 8.1.
    Is it possible to download it as a standalone package?
    TIA,
    Borre

    "Børre Nordbakken" wrote:
    Hi,
    dbKona does not longer exists in WLS81
    Is it possible to download this as an standalone package ?
    Hi.
    You can extract and use dbkona independently or with any version of weblogic.
    The dbkona classes are not dependent on any other package. BEA has deprecated
    dbKona for 8.1. I am lobbying management to provide dbKona and it's source to the
    dev2dev site.
    Joe
    >
    TIA,
    Borre

  • Picking through strategy M (small and large quantities)

    Hi SAP Gurus,
    We are using picking strategy M (small and large quantities). In the materials, we have defined a control quantity of 30 cs. So when a transfer order is created for more than 30 cs, the system picks from storage type 001, and if the TO quantity is less than 30 cs, the system picks from storage type 002. This is working fine. My picking sequence is: 001, then 002. But when there is no stock in 001, the system doesn't automatically suggest picking from 002.
    We would like the system to go to storage type 002 and pick from there if there isn't any stock left in 001. I need this urgently; how can I make this work?
    Regards
    Madhu.

    Hi Sandeep,
    Check the storage type search sequence. The first storage type should be the one where you are going to store the smallest quantity; for the second storage type in the sequence you enter the storage type for the medium-size quantity, and so on.
    In your scenario, for strategy M the sequence has to run from smaller storage type to larger, so it should be 002 and then 001. The system then picks all small quantities from 002 and then moves on to 001. Any alteration beyond that will probably need a user exit.
    Best Regards
    Madhu

  • Will the 1tb and larger notebook drives work in my X60?

    I am out of space on my hard disk and see now that notebook drives are available in 1tb and larger sizes. Someone told me that these drives, while still called 2.5" drives, are actually thicker than the 640gb and smaller notebook drives, and may not fit my computer.
    Will one of the 1tb+ drives work in my X60?

    You would need a 2.5-inch SATA drive with 9.5 mm thickness; the max capacity you can get in that profile is 750 GB. The 1 TB drives and anything larger are all 12.5 mm thick, and won't fit.
    Regards,
    Jin Li
    May this year, be the year of 'DO'!
    I am a volunteer, and not a paid staff of Lenovo or Microsoft

  • HT3702 Amounts have been deducted from my account repeatedly since January, some of them large; please explain, and return the amounts if they were taken unjustly

    Amounts have been deducted from my account repeatedly since January, some of them large. Please explain, and return the amounts if they were taken unjustly.

    This is a user-to-user forum.
    To contact iTunes Customer Service and request assistance,
    use this link  >  Apple Support > iTunes Store > Contact

  • Best way to return large resultsets

    Hi everyone,
    I have a servlet that searches a (large) database by complex queries and sends the results to an applet. Since the database is quite large and the queries can be quite general, it is entirely possible that a particular query can generate a million rows.
    My question is, how do I approach this problem from a design standpoint? For instance, should I send the query without limits and get all the results (possibly a million) back? Or should I get only a few rows, say 50,000, at a time by using the SQL limit construct or some other method? Or should I use some totally different approach?
    The reason I am asking is that I have never had to deal with such large results, and the expertise on this group will help me avoid some design pitfalls at the very outset. Of course, there is the question of whether the servlet should send so many results at once to the applet, but that's probably for another forum.
    Thanks in advance,
    Alan

    If you are using one of the premier databases (Oracle, SQL Server, Informix), I am fairly confident that it would be best to allow the database to manage both the efficiency of the query and the efficiency of the transport.
    QUERY EFFICIENCY
    Query efficiencies in all databases are optimized by the DBMS to a general algorithm. That means there are assumptions made by the DBMS as to the 'acceptable' number of rows to process, the number of tables to join, the number of rows that will be returned, etc. These general algorithms do an excellent job on 95+% of queries run against the database. However, if you fall outside the bounds of these general algorithms, you will run into escalating performance problems. Luckily, SQL syntax provides enormous flexibility in how to get your data from the database, and you can code the SQL to 'help' the database do a better job when SQL performance becomes a problem. In the extreme, it is possible that you will issue a query that overwhelms the database and the physical resources available to it (memory, CPU, I/O channels, etc). Sometimes this can happen even when a ResultSet returns only a single row; in that case, it is the intermediate processing (table joins, sorts, etc) that overwhelms the resources. You can help manage the memory issue by purchasing more memory (obviously), or by re-coding the SQL to apply a more efficient algorithm (make the optimizer do a better job), or you may, as a last resort, have to break the SQL up into separate statements, using a more granular approach (this is your "where id < 1000"). BTW: if you do have to use this approach, in most cases using BETWEEN is more efficient.
    TRANSPORT
    Most if not all JDBC drivers return the ResultSet data in 'blocks' of rows that are delivered on an as-needed basis to your program. Some databases allow you to specify the size of these 'blocks' to aid in the optimization of your batch-style processes. Assuming that this is true for your JDBC driver, you cannot manage it better than the JDBC driver implementation does, so you should not try. In all cases, you should allow the database to handle as much of the data manipulation and transport logic as possible. The vendors have thousands of programmers working overtime to optimize that code; they simply have you outnumbered, and while it's possible that you can code an efficiency, it's also possible that you will be unable to take advantage of future efficiencies within the database because of your proprietary ones.
    You have some interesting and important decisions to make. I'm not sure how much control of the architecture is available to you, but you may want to consider alternatives to moving these large amounts of data around through the JDBC architecture. Is it possible to store this information on the server and have it fetched using FTP or some other simple transport? Far less CPU usage, and a more efficient use of your bandwidth.
    So, in case it wasn't clear: no, I don't think you should break up the SQL initially. If it were me, I would probably spend the time putting out some metric-based information to let you judge where the slowdowns are, when or if any occur. With something like this, I have seen I.T. spend hours and hours tuning SQL just to find out that the network was the problem (or vice versa). I would also go ahead and run the expected queries outside the application and determine what kind of problems there are before coding of the application is finished.
    Hey, this got a bit wordy, sorry. Hopefully there is something in here that can help you... Joel
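    To make the 'blocks of rows' point concrete, here is a sketch of consuming a large result incrementally; conn and query are stand-ins for your own setup, and the fetch size is only a hint to the driver:

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        // Forward-only ResultSet consumed row by row; the driver fetches
        // from the server in blocks of roughly getFetchSize() rows.
        static void stream(Connection conn, String query) throws SQLException {
            Statement stmt = conn.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            stmt.setFetchSize(5000); // block-size hint; drivers may ignore it
            ResultSet rs = stmt.executeQuery(query);
            while (rs.next()) {
                // stream each row on to the applet instead of buffering all of them
            }
            rs.close();
            stmt.close();
        }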

  • Unable to fetch large resultset

    I got this error when I was trying to retrieve records from a SQL Server table which consists of around 2 lakh (200,000) records:
    com.microsoft.sqlserver.jdbc.SQLServerException: The system is out of memory. Use server side cursors for large result sets:Java heap space. Result set size:78,585,571. JVM total memory size:66,650,112.
    How can I fetch these records?

    Three choices:
    1) Make your Java heap size big enough to hold the entire ResultSet. If you're using Sun's Java (almost everyone does), you increase the maximum heap with the -Xmx parameter when the JVM is started. It looks like you're using the default maximum, which is pretty small. See:
    http://java.sun.com/j2se/1.3/docs/tooldocs/solaris/java.html
    2) Retrieve the ResultSet from the database incrementally, and process it incrementally. Statement.setFetchSize() suggests to the driver how much of the ResultSet to keep in memory (and to retrieve in one chunk from the database), but it's common for scrollable ResultSets to ignore this hint and try to keep everything in memory. A forward-only ResultSet (the default) is more likely to work incrementally, but it depends entirely on your driver.
    3) Break the data into multiple queries (see the sketch below).
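    A minimal sketch of choice 3, assuming the table has a numeric key column named id (the table and column names are illustrative):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        // Keyset paging: repeatedly fetch the next batch of rows whose key
        // is greater than the last key seen, so no single query has to
        // return (or buffer) the whole table.
        static void fetchInBatches(Connection conn) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, data FROM big_table WHERE id > ? ORDER BY id");
            ps.setMaxRows(10000); // cap each batch at 10,000 rows
            long lastId = 0;
            boolean more = true;
            while (more) {
                more = false;
                ps.setLong(1, lastId);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    lastId = rs.getLong("id");
                    // ... process the row ...
                    more = true;
                }
                rs.close();
            }
            ps.close();
        }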

  • Sharepoint Foundation 2010 and Large Media Files?

    I have already seen links giving instructions on how to raise the default 50MB upload limit to as much as 2GB, but it doesn't seem like a good idea based on all the caveats and warnings about it.
    We occasionally need to give external SharePoint users access to files much larger than 50MB (software installation files, audio recordings and video recordings), even though most documents are much smaller than 50MB (Office documents). What is the best solution that does not involve third-party external services such as OneDrive, Azure or Dropbox? We must host all of our files on premises.
    The SharePoint server is Internet accessible, but it requires AD authentication to log in and access files.
    Some have recommended file-server shares for the larger files, but the Internet users only have AD accounts that are used to access the SharePoint document libraries; they do not have the VPN that would be needed to reach an internal file share.
    I have heard of FTP and SFTP, but the users need something more user-friendly that doesn't require any application other than their browser, that will use their existing AD credentials, and that gives us auditing of who is uploading and downloading files.
    Is there any other solution besides raising the file limit to 1 or 2GB just for a handful of large files in a document library full of mostly sub-50MB files?

    I had a previous post about performance impacts of uploading/downloading large content on SharePoint.
    Shredded storage has little to do with this case, as it handles Office documents shared on SharePoint that are edited in the Office client, saving back only the differences and thereby lightening the throughput.
    These huge files are not to be edited; they're uploaded once.
    It's a shame to expose this SharePoint farm on the extranet just because of a handful of huge files.
    Doesn't the company have a web server in the DMZ hosting an extranet portal? Couldn't that extranet portal feature a page that discovers the files intended to be downloaded from outside and then act as a reverse proxy to stream them out?
    Or, it may be a chance to build a nice portal site on SharePoint.

Maybe you are looking for

  • How to open an R3 Transaction screen in VC

    Hi There, Can anyone tell me how to model VC in such a way that I can open up an R3 transaction screen from VC? I have done this in the course, but cannot reproduce the example with another TX code. I put this code in my inputfield url: "pcd!3aportal_c

  • My constructor method is bogged!

    Hi There, I've got some problems with constructor methods in my program, of course the constructors look fine to me, but then faulty code usually looks good to any programmer after enough hours! Here is some of my code, firstly the following snippet

  • I cannot click on certain links in emails. What can I do to access these links? Thanks

    I cannot click on certain links in emails. What can I do to access these links? Thanks

  • How Delete output types

    Hi all, How do I delete output types (for example BA00) triggered in the output determination before saving the application? For example, assuming a delivery is created, an output type BA00 is triggered automatically, but I want to delete this ou

  • Can't connect to i

    I just got my iPhone and purchased a TV show from the iTunes Store. I did sync my purchase with the phone, but I keep getting an error message stating "cannot connect to iTunes store". Any suggestions?