Loading Large Data Set Times Out In APEX

I am trying to load a large text file using APEX 3. The file is about 1 GB and has roughly 50,000 rows. After about 5 minutes the browser times out. Any ideas? There is nothing in the alert log, so it does not appear to be database related. Here is the error in the Apache log:
mod_plsql: Long running URL [pls/apex/wwv_flow.accept] timed out

Steve,
The Apache process is timing out. Most likely the Timeout directive in your httpd.conf is set to 300 seconds (the default).
You can extend it beyond 300 seconds, but if I were in your shoes and you had the proper access, I'd use some other means to load 1 GB of data (e.g., SQL*Loader).
Joel
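
As a side note for readers of this archive, the "other means" Joel mentions can also be a short script run on the server, so the browser and mod_plsql are never involved. Below is a minimal, hypothetical sketch using Python and the python-oracledb driver; the file name, delimiter, table, and columns are invented, and SQL*Loader remains the simpler tool if it is available.

    import csv
    import oracledb  # python-oracledb driver; connection details are placeholders

    BATCH_SIZE = 10_000
    INSERT_SQL = "INSERT INTO staging_tbl (c1, c2, c3) VALUES (:1, :2, :3)"  # invented table/columns

    with oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb") as conn:
        with conn.cursor() as cur, open("big_file.txt", newline="") as f:
            reader = csv.reader(f, delimiter="\t")   # assumed tab-delimited; adjust for the real file
            batch = []
            for row in reader:
                batch.append(row)
                if len(batch) >= BATCH_SIZE:
                    cur.executemany(INSERT_SQL, batch)   # one round trip per batch, not per row
                    batch.clear()
            if batch:
                cur.executemany(INSERT_SQL, batch)
        conn.commit()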

Similar Messages

  • Need to load large data set from Oracle table onto desktop using ODBC

    I don't have TOAD or any other tool for querying the database. I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel, Access, or some other tool, with or without ODBC. I need the results in a .csv file or something similar. Speed is what is important here. I'm looking to load more than 1 million but fewer than 10 million records at once. Thanks.

    Use Oracle's free SQL Developer:
    http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    You can just issue a query like this:
    SELECT /*csv*/ * FROM SCOTT.EMP
    Then save the results to a file.
    See this article by Jeff Smith for other options:
    http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/
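
    If SQL Developer is not an option either, a short script can stream the rows straight to a .csv file. This is a hypothetical sketch using the python-oracledb driver; the connection details, query, and output file name are placeholders:

        import csv
        import oracledb  # python-oracledb driver

        with oracledb.connect(user="report_user", password="secret", dsn="dbhost/orclpdb") as conn:
            with conn.cursor() as cur, open("emp_extract.csv", "w", newline="") as out:
                cur.arraysize = 5000                      # fetch in large batches for speed
                cur.execute("SELECT * FROM scott.emp")    # placeholder query
                writer = csv.writer(out)
                writer.writerow(col[0] for col in cur.description)   # header row from column names
                while True:
                    rows = cur.fetchmany()                # pulls cur.arraysize rows per call
                    if not rows:
                        break
                    writer.writerows(rows)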

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have the LabVIEW 2009 64-bit version running on a Win7 64-bit OS with an Intel Xeon dual quad-core processor and 16 GB of RAM. With the release of this 64-bit version of LabVIEW, I expected to be able to easily handle x-ray computed tomography data sets in the 2-3 GB range in RAM, since we now have access to all of the available RAM. But I am having major problems - sluggish (and stopped) operation of the program, inability to perform certain operations, etc.
    Here is how I store the 3-D data that consists of a series of images. I store each of my 2d images in a cluster, and then have the entire image series as an array of these clusters.  I then store this entire array of clusters in a queue which I regularly access using 'Preview Queue' and then operate on the image set, subsets of the images, or single images.
    Then I enqueue that entire array of clusters (the original post illustrated this step with a block-diagram snippet).
    I remember talking to LabVIEW R&D years ago and hearing that this was a good way to do things because it allowed non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3-D array without the clusters) (R&D - this is what I remember, please correct me if wrong).
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded - and I think disk access as well once it needs memory beyond the 16 GB - I am wondering if I need a different storage strategy that will allow seamless program operation while still using RAM storage (I do not want to have to recall images from disk).
    I have other CT imaging programs that are running very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application.   I would like to work with LabVIEW R&D to solve this issue.  I am wondering if I should be thinking about establishing say, 10 queues, instead of 1, to address this.  It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600-700 MB range with 64-bit LabVIEW.
    With 32-bit LabVIEW, 100-200 MB sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM. We could have used other means, such as LV2-style globals. But clustering each 2-D array (image) and then holding a series of those clusters in an array (the final structure I showed in my diagram), rather than using a single 3-D array, is I believe what allowed me to get this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 GB. I probably need to have someone examine this code while I am explaining things to them live. This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem. In some of my applications, I use the In Place Element structure for indexing data out of arrays to minimize data copies. I expect I might have to consider this strategy here as well. Just a thought.
    What I can do is send someone (in the US) via large file transfer a 1.3-2.7 GB set of image data and see how they would best advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and how not to make data copies. The operations that I apply to the images are irrelevant. It is the storage, movement, and extraction that are causing the problems. I can also show screenshot(s) of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how data value references may help here, or how they have helped in one of their applications? Would using them eliminate copies? I currently have to wait for the 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don
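
    (Not LabVIEW, but for anyone skimming this thread, the layout trade-off Don describes can be sketched in a few lines of Python/NumPy: one contiguous 3-D block versus a collection of independent 2-D images. The sizes below are invented stand-ins, scaled well below the real CT sets.)

        import numpy as np

        # Scaled-down stand-ins; the real sets are hundreds of large int16 images.
        frames, height, width = 50, 512, 512

        # Option A: one contiguous 3-D block - fast slicing, but the allocator must find
        # a single large contiguous region, and whole-array operations touch all of it.
        volume = np.zeros((frames, height, width), dtype=np.int16)

        # Option B: independent 2-D images (a rough analogue of the array-of-clusters idea) -
        # each frame is its own allocation, so memory can be found piecemeal, at the cost
        # of a per-frame loop for whole-series operations.
        images = [np.zeros((height, width), dtype=np.int16) for _ in range(frames)]

        # In either layout, in-place updates avoid the extra copy that "modify and reassign" makes.
        volume[10] += 100   # modifies frame 10 of the block in place
        images[10] += 100   # same idea for the per-frame layout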

  • Large data sets and key terms

    Hello, I'm looking for some guidance on how BI can help me. I am a business analyst at a health solutions firm, but not proficient in SQL. However, I have to work with large data sets that just exceed the capabilities of Excel.
    Basically, I'm having to use Excel to manually search for key terms and apply a value to those results. For instance, I have a medical claims file with Provider Names, Tax ID, Charges, etc. It's 300,000 records long and 15-25 columns wide. I need to search for key terms in the provider name like Ambulance, Fire Dept, Rescue, EMT, EMS, etc. - anything that resembles an ambulance service. I also need to include abbreviations such as AMB or FD, and variations like EMT, E M T, EMS, E M S, etc. Each time I do a search, I have to filter and apply an "N/A" flag.
    That's just one key term. I also have things like Dentists or DDS, Vision, Optometry, and a dozen other Provider Types that need to be flagged as "N/A".
    Is this something that can be handled using BI? I have access to a BI group, but I need to understand more about the capabilities of what can be done. As an analyst, I'm having to deal with poor data integrity, so just cleaning up the file can be extremely taxing and cumbersome.
    Some insight would be very helpful. Thanks.

    I am not sure if you are looking for an explanation of different BI products; if so, maybe this forum is not the place to get a straight answer.
    But the Information Discovery product suite might be useful in your case. Regarding the "large data set" you mentioned, searching and analyzing 300,000 records may not be considered a large data set, at least by Endeca standards :).
    All your other requests could also be very easily implemented using Endeca's product suite. Please reach out to Oracle's Endeca product team and they can guide you on how this product suite would help you.
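
    (For what it's worth, this kind of key-term flagging also fits comfortably in a small script. Here is a hypothetical pandas sketch; the file name, column name, and term lists are invented to mirror the description above, not taken from the actual claims file.)

        import re
        import pandas as pd

        # Invented patterns mirroring the examples in the question.
        ambulance_terms = [r"\bAMBULANCE\b", r"\bAMB\b", r"\bFIRE DEPT\b", r"\bFD\b",
                           r"\bRESCUE\b", r"\bEMT\b", r"\bE M T\b", r"\bEMS\b", r"\bE M S\b"]
        dental_terms = [r"\bDENTIST\b", r"\bDDS\b"]

        def flag(df, column, patterns, label="N/A"):
            """Mark rows whose provider name matches any of the given patterns."""
            pattern = "|".join(patterns)
            hits = df[column].fillna("").str.contains(pattern, flags=re.IGNORECASE, regex=True)
            df.loc[hits, "Flag"] = label
            return df

        claims = pd.read_csv("claims.csv")        # invented file name; 300,000 rows is small for pandas
        claims["Flag"] = ""
        claims = flag(claims, "Provider_Name", ambulance_terms)
        claims = flag(claims, "Provider_Name", dental_terms)
        claims.to_csv("claims_flagged.csv", index=False)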

  • Moving large data real-time

    Does anyone have an idea for moving a large data set from one core's thread to another's thread in a Real-Time system, where it will be put into a DAQ's output buffer? I have an array, which at the maximum is 125625 x 23 x I16, that is built column by column (the 23 dimension; actually it is filled in replace-array-element mode) and is then to be "exported" to the other core's thread. There it is to be loaded into the DAQ cards' memory, to be output. I need to be able to do this in a pretty timely manner. When I try to use a Functional Global Variable that is "loaded" one column at a time, it is _slow_! I need a faster method!
    Thanks,
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion

    I haven't tried queues for core-to-core communication before - hadn't considered them - but since some recent revelations have shown that RT-level determinism isn't a critical issue, I guess I can examine them as well. Not that queues necessarily have jitter issues, but I was being conservative. I will put together a trial test case. Thanks, as usual, for your help Ben. Hope the weather in your neck of PA wasn't too bad. Up in Syracuse there is 400% more snow than this time last year, and last year the season had 140+" (the norm is closer to 115"), and that was in a very truncated winter, with not much until the third week of January.
    Hmm, three stars. Wonder what I said that annoyed someone, or underwhelmed them?
    Even weirder is that at the LabVIEW forum level it is showing 4 stars, two voters.  Hmmm, too early for me, the coffee isn't hitting (actually not able to drink coffee lately)
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion
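
    (The same "ship whole blocks, not columns" idea, sketched here in Python with a thread-safe queue purely as an illustration of why a single enqueue of the full array beats 23 per-column updates of a global. Sizes and names are invented; this is not RT code.)

        import queue
        import threading
        import numpy as np

        transfer_q = queue.Queue(maxsize=2)            # small bound keeps memory in check

        def producer():
            # Build the whole block first (125625 x 23 int16, as in the question),
            # then hand it over in one enqueue instead of 23 per-column writes.
            block = np.zeros((125625, 23), dtype=np.int16)
            for col in range(block.shape[1]):
                block[:, col] = col                    # stand-in for real column data
            transfer_q.put(block)                      # one cheap hand-off of the whole block

        def consumer():
            block = transfer_q.get()                   # stand-in for loading the DAQ output buffer
            print("received", block.shape, block.dtype)

        threading.Thread(target=producer).start()
        consumer()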

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GCs, namely the parallel GC.
    Regards
    Marius

  • XML Solutions for Large Data Sets

    Hi,
    I'm working with a large data set (9 million records comprising 36 gigabytes) and am exploring the use of XML with it.
    I've experimented with a JDBC app (taken straight from Steve Muench's excellent Oracle XML Applications) for writing to CLOBs, but achieve throughputs of much less than 40 KB/s (the minimum speed required to process the data in under 10 days).
    What kind of throughputs are possible loading XML records from CLOBs into multiple tables (using server-side Java apps)?
    Could anyone comment whether XML is a feasible possibility for this size data set?
    Regards,
    Mike

    I just wanted to identify myself (I'm the submitter):
    Michael Driscoll <[email protected]>.
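
    (A rough, hypothetical sketch of one streaming approach in Python: parse the XML incrementally so the full 36 GB never sits in memory, and insert in batches for throughput. The element names, columns, and connection details are invented, and this is not the original JDBC/CLOB design; it only illustrates the batch-loading idea.)

        import xml.etree.ElementTree as ET
        import oracledb  # python-oracledb driver; connection details are placeholders

        BATCH = 5_000
        INSERT_SQL = "INSERT INTO records (id, name, amount) VALUES (:1, :2, :3)"  # invented schema

        def record_batches(path):
            """Yield lists of (id, name, amount) tuples from <record> elements (invented schema)."""
            batch = []
            for _, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == "record":
                    batch.append((elem.get("id"), elem.findtext("name"), elem.findtext("amount")))
                    elem.clear()                      # release the parsed subtree promptly
                    if len(batch) >= BATCH:
                        yield batch
                        batch = []
            if batch:
                yield batch

        with oracledb.connect(user="loader", password="secret", dsn="dbhost/orclpdb") as conn:
            with conn.cursor() as cur:
                for batch in record_batches("records.xml"):
                    cur.executemany(INSERT_SQL, batch)
            conn.commit()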

  • Working with Large data sets Waveforms

    When collecting data at a high rate (30 kHz) and for a long period (120 seconds), I'm unable to rearrange the data due to memory errors. Is there a more efficient method?
    Attachments:
    Convert2Dto1D.vi ‏36 KB

    Some suggestions:
    Preallocate your final data before you start your calculations.  The build array you have in your loop will tend to fragment memory, giving you issues.
    Use the In Place Element to get data to/from your waveforms.  You can use it to get single waveforms from your 2D array and Y data from a waveform.
    Do not use the Transpose and autoindex.  It is adding a copy of data.
    Use the Array palette functions (e.g. Reshape Array) to change sizes of current data in place (if possible).
    You may want to read Managing Large Data Sets in LabVIEW.
    Your initial post is missing some information.  How many channels are you acquiring and what is the bit depth of each channel?  30kHz is a relatively slow acquisition rate for a single channel (NI sells instruments which acquire at 2GHz).  120s of data from said single channel is modestly large, but not huge.  If you have 100 channels, things change.  If you are acquiring them at 32-bit resolution, things change (although not as much).  Please post these parameters and we can help more.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
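
    (The "preallocate, then fill" advice translates directly to other languages. Here is a tiny NumPy illustration, with invented channel counts and chunk sizes, of why growing an array in a loop hurts while writing into a preallocated block does not.)

        import numpy as np

        samples, channels = 30_000 * 120, 4          # 120 s at 30 kHz, 4 invented channels
        chunk_len = 30_000                           # pretend one second of data arrives at a time

        # Bad: growing the array each iteration reallocates and copies repeatedly.
        grown = np.empty((0, channels))
        # for chunk in acquisition:                  # pseudo-loop over acquired chunks
        #     grown = np.vstack([grown, chunk])

        # Better: allocate the final size once and write each chunk into place.
        data = np.empty((samples, channels), dtype=np.float64)
        for i in range(samples // chunk_len):
            chunk = np.random.rand(chunk_len, channels)   # stand-in for one second of acquisition
            data[i * chunk_len:(i + 1) * chunk_len, :] = chunk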

  • Safari keeps crashing when loading large data on web pages

    I have owned my iPad 3 for more than a year now and I have never encountered this problem, but ever since I updated to the latest iOS version (7.1.1), Safari keeps crashing on my community site, where threads can have over 100 comments in them.
    Now whenever I open a thread page in Safari, it loads for a few seconds and then crashes; to make things worse, once in a while when it crashes it reboots my iPad. I have seen several similar questions from other people about Safari crashing when loading large amounts of data, with answers saying that there's something wrong with the CSS or something, but I never got any idea of how to resolve this bug.
    Can anyone tell me how to prevent Safari from crashing every time I open a thread? Any solutions? Because I am getting really ******.
    (PS: I have already tried other browsing apps such as Google Search and Chrome; they still crash. I even tried letting it load half-way and then stopping it, but that isn't going well.)

    I have restored my iOS devices a number of times in order to resolve issues and it has gone very smoothly every time. I am not going to lie, it takes a fair amount of time to backup, restore the iOS and then restore from the backup, but it could very well resolve the issue.
    On the other hand, it may not help at all. Restoring the software is a standard troubleshooting measure, and that is why it is recommended when other suggestions aren't working. But before you restore, there are a couple of other things that you could try.
    Reset all settings. Settings>General>Reset>Reset all Settings. You will not lose any data when you do this, but it does take some time to enter all of the device settings again, so be aware of that.
    Another thing to try is to erase the device and start over. This is different from restoring to factory settings. Read this for more information:
    iOS: How to back up your data and set up your device as a new device

  • SSAS Date Parameter times out

    Environment: We are running SQL Server 2012 Standard Edition with Analysis Services.  Reporting Services on a separate machine. Visual Studio 2010. 
    I have a Date dimension with a generated table named Time.  To the table I have added custom columns to show Week_Ending and Short_Date_Alpha that are used in Excel Reports from the Analysis Services Cubes.
    When I attempt to use this dimension in Reporting Services, it times out if I try to use it as a parameter. The report will run for 30 minutes, never surface the parameter selection box, and then eventually time out with an unknown error related to the time dimension. This is in Visual Studio.
    If I pull the dimension directly from the SQL table I can produce a report on it. If I run the full query for the measures with the default date selected, it runs in seconds in the query builder. Here are 10 rows of the relevant table structure:
    (Every datetime value carries a 00:00:00.000 time component, omitted below for readability.)
    PK_Date | Date_Name | Reporting_Year | Reporting_Year_Name | Reporting_Month | Reporting_Month_Name | Reporting_Week | Reporting_Week_Name | Reporting_Day | Reporting_Day_Name | Short_Date_Alpha | End_of_Week_Name | Week_Ending | WorkDay
    2008-01-01 | Tuesday, January 01 2008 | 2007-01-07 | 2007 | 2007-12-02 | Rpt Dec 2007 | 2007-12-30 | Reporting Week 52, 2007 | 2008-01-01 | Tuesday, January 01 2008 | Jan 1 2008 | WE 01/07/08 | 2008-01-07 | 1
    2008-01-02 | Wednesday, January 02 2008 | 2007-01-07 | 2007 | 2007-12-02 | Rpt Dec 2007 | 2007-12-30 | Reporting Week 52, 2007 | 2008-01-02 | Wednesday, January 02 2008 | Jan 2 2008 | WE 01/07/08 | 2008-01-07 | 1
    2008-01-03 | Thursday, January 03 2008 | 2007-01-07 | 2007 | 2007-12-02 | Rpt Dec 2007 | 2007-12-30 | Reporting Week 52, 2007 | 2008-01-03 | Thursday, January 03 2008 | Jan 3 2008 | WE 01/07/08 | 2008-01-07 | 1
    2008-01-04 | Friday, January 04 2008 | 2007-01-07 | 2007 | 2007-12-02 | Rpt Dec 2007 | 2007-12-30 | Reporting Week 52, 2007 | 2008-01-04 | Friday, January 04 2008 | Jan 4 2008 | WE 01/07/08 | 2008-01-07 | 1
    2008-01-10 | Thursday, January 10 2008 | 2008-01-06 | 2008 | 2008-01-06 | Rpt Jan 2008 | 2008-01-06 | Reporting Week 1, 2008 | 2008-01-10 | Thursday, January 10 2008 | Jan 10 2008 | WE 01/12/08 | 2008-01-12 | 1
    2008-01-11 | Friday, January 11 2008 | 2008-01-06 | 2008 | 2008-01-06 | Rpt Jan 2008 | 2008-01-06 | Reporting Week 1, 2008 | 2008-01-11 | Friday, January 11 2008 | Jan 11 2008 | WE 01/12/08 | 2008-01-12 | 1
    2008-01-12 | Saturday, January 12 2008 | 2008-01-06 | 2008 | 2008-01-06 | Rpt Jan 2008 | 2008-01-06 | Reporting Week 1, 2008 | 2008-01-12 | Saturday, January 12 2008 | Jan 12 2008 | WE 01/12/08 | 2008-01-12 | 0
    2008-01-13 | Sunday, January 13 2008 | 2008-01-06 | 2008 | 2008-01-06 | Rpt Jan 2008 | 2008-01-13 | Reporting Week 2, 2008 | 2008-01-13 | Sunday, January 13 2008 | Jan 13 2008 | WE 01/19/08 | 2008-01-19 | 0
    2008-01-19 | Saturday, January 19 2008 | 2008-01-06 | 2008 | 2008-01-06 | Rpt Jan 2008 | 2008-01-13 | Reporting Week 2, 2008 | 2008-01-19 | Saturday, January 19 2008 | Jan 19 2008 | WE 01/19/08 | 2008-01-19 | 0
    2008-01-20 | Sunday, January 20 2008 | 2008-01-06 | 2008 | 2008-01-06 | Rpt Jan 2008 | 2008-01-20 | Reporting Week 3, 2008 | 2008-01-20 | Sunday, January 20 2008 | Jan 20 2008 | WE 01/26/08 | 2008-01-26 | 0
    The dimension looks fine when I browse it.   It produces the correct information in the Excel Reports, but I cannot get it to work in Reporting Services.  
    I have deleted dates out of the cubes and processed, then added the Time dimension back in and processed and it still won't work.   I have a cube running off a table called Date and that seems to work. 
    Is the issue with the table name?  With the custom descriptions?   Something else? 

    How would you best suggest limiting the time frame? Add a filter to the query so that future dates are not part of the selection? I'm doing some data refresh in the data mart, so I won't be able to try limiting the selection for a few hours.
    Hi Diane,
    Could you please let us know how many available values there are for your date parameter (i.e., the size of the date dimension in SSAS)? One workaround we can try is to add an additional text parameter that prefilters the available values in the large parameter list.
    Here is a similar thread on this topic for your reference:
    http://dataqueen.unlimitedviz.com/2012/02/filter-a-parameter-with-long-list-of-values-using-type-ahead/
    Regards,
    Elvis Long
    TechNet Community Support

  • Just in case anyone needs an ObservableCollection that deals with large data sets and supports FULL EDITING...

    The VirtualizingObservableCollection does the following:
    Implements the same interfaces and methods as ObservableCollection<T> so you can use it anywhere you’d use an ObservableCollection<T> – no need to change any of your existing controls.
    Supports true multi-user read/write without resets (maximizing performance for large-scale concurrency scenarios).
    Manages memory on its own so it never runs out of memory, no matter how large the data set is (especially important for mobile devices).
    Natively works asynchronously – great for slow network connections and occasionally-connected models.
    Works great out of the box, but is flexible and extendable enough to customize for your needs.
    Has a data access performance curve so good it’s just as fast as the regular ObservableCollection – the cost of using it is negligible.
    Works in any .NET project because it’s implemented in a Portable Code Library (PCL).
    The latest package can be found on NuGet (Install-Package VirtualizingObservableCollection). The source is on GitHub.
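
    (The core idea here, data virtualization, is not tied to .NET. Purely as an illustration, below is a minimal Python sketch of a read-only paged sequence that fetches and caches fixed-size pages on demand instead of holding the full set in memory. All names are invented, and it deliberately lacks the editing, async, and change-notification features listed above.)

        from functools import lru_cache

        class PagedSequence:
            """Read-only sequence that fetches fixed-size pages on demand and caches a few of them."""

            def __init__(self, fetch_page, total_count, page_size=200):
                # fetch_page(page_index) -> list of items for that page (hypothetical callback)
                self._fetch_page = lru_cache(maxsize=32)(fetch_page)
                self._count = total_count
                self._page_size = page_size

            def __len__(self):
                return self._count

            def __getitem__(self, index):
                if index < 0 or index >= self._count:
                    raise IndexError(index)
                page, offset = divmod(index, self._page_size)
                return self._fetch_page(page)[offset]

        # Example: each "page" pretends to come from some slow backing store.
        seq = PagedSequence(lambda page: list(range(page * 200, (page + 1) * 200)), total_count=1_000_000)
        print(len(seq), seq[0], seq[123456])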

    Good job, thank you for sharing
    Best Regards,
    Please remember to mark the replies as answers if they help

  • XY graphs under-perform on large data sets

    If for example you have 3 signals with 8 million points each and you plot these on a regular waveform graph, the user interface is able to display the data smoothly. All graph palette operations (zoom, scroll etc.) respond in "real-time".
    Put the same 3x8 million points on an XY graph, and you have one sluuuuggish user interface. Scrolling is for example no longer possible in any practical fashion.
    I'm sure a lot of it has to do with the overhead of having all those X-values (often unnecessarily many - as discussed in this idea), but the performance degradation compared to a regular waveform graph (even if the latter is fed twice the amount of Y values for example) is severe.
    Are there ways around this performance issue? Sure. We can e.g. write code that decimates the data we send to the indicator, and refills it when the user zooms or scrolls and therefore needs additional data points. But this requires lots of code, and can never become as transparent/integrated and smooth as an implementation within the indicator itself. 
    And competing products are already there, that's what bugs me right now. I've got colleagues that get such functionality "for free" with the graphing tools they have.
    So, we're about to develop an XControl that makes it possible to present such large non-continuous data sets in a smooth manner. (Ironically, one solution is to add data points so that I have continuous data - and then use the regular graph...) But has anyone already done this? And how far off is a native XY graph indicator that makes such code obsolete?
    MTO
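
    (For reference, the decimation idea itself is compact; the XControl and refill-on-zoom plumbing is the part that takes real work. Below is a hypothetical min/max-per-bucket sketch in Python, which is one common way to shrink millions of XY points to a few thousand without losing visible peaks. Bucket counts and signal shapes are invented.)

        import numpy as np

        def minmax_decimate(x, y, buckets=2000):
            """Reduce (x, y) to at most 2*buckets points, keeping each bucket's min and max of y."""
            n = len(x)
            if n <= 2 * buckets:
                return x, y
            edges = np.linspace(0, n, buckets + 1, dtype=int)
            xs, ys = [], []
            for lo, hi in zip(edges[:-1], edges[1:]):
                seg = y[lo:hi]
                i_min, i_max = lo + np.argmin(seg), lo + np.argmax(seg)
                for i in sorted((i_min, i_max)):        # keep both extremes, in x order
                    xs.append(x[i])
                    ys.append(y[i])
            return np.array(xs), np.array(ys)

        # Example: 8 million points reduced to ~4000 for display.
        x = np.arange(8_000_000, dtype=np.float64)
        y = np.sin(x / 50_000) + np.random.normal(scale=0.01, size=x.size)
        dx, dy = minmax_decimate(x, y, buckets=2000)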


  • Set time out for single webservice in NWDS 2004s

    Hi,
    I created web services for a session bean. The created web services are consumed by a Web Dynpro client.
    When the Web Dynpro client consumes them, if the response takes more than 60 seconds the web services time out, so I want to set the timeout for my web service.
    How can I set the timeout (to more than 60 seconds) for my web service (for one service)?
    Thanks in advance.

    Hi
    Try this - I am not sure, as I have not done it myself.
    Go to this link: http://<server>:<j2ee port>/nwa --> System Management --> Overview --> Configuration
    Application Resources (select your resource from the list).
    Check for "Connection Pooling".
    Here you can check the different options.

  • How to set a timeout in an ABAP program?

    Hi,
    I want to set the execution time in the IN_UPDATE block of a BAdI; could you tell me how to code that?
    When the specified execution time is reached and the BAdI is still running, a window should pop up and show the end user a timeout message.
    Does SAP have any function regarding this? Thanks in advance.

    You can have the INPUT be a number of seconds, then calculate the end time as endtime = SY-UZEIT + INPUT. Have checks throughout the program that return an error in OUTPUT once SY-UZEIT (the current time) >= endtime.
    There may be an easier way, but this is all I can think of as a user-set timeout option.
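
    (The reply describes a cooperative deadline check. Language aside, the pattern looks like the sketch below, written in Python only to make the control flow explicit; in ABAP the "now" would be SY-UZEIT and the checks would sit at convenient points inside the BAdI.)

        import time

        def run_with_deadline(work_steps, timeout_seconds):
            """Run work_steps (an iterable of callables) and stop once the deadline passes."""
            deadline = time.monotonic() + timeout_seconds      # analogue of SY-UZEIT + INPUT
            for step in work_steps:
                if time.monotonic() >= deadline:               # the scattered "out of time?" check
                    return "TIMEOUT"                           # analogue of returning an error in OUTPUT
                step()
            return "OK"

        # Example: three dummy one-second steps with a five-second budget.
        result = run_with_deadline([lambda: time.sleep(1)] * 3, timeout_seconds=5)
        print(result)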

  • Setting time outs for a secure connection

    I am researching on setting up a secure application incorporating two authentication
    mechanisms:
    a) Mutual Authentication to verify certificates
    b) RDBMS Security Realm to validate username/password combinations
    I desire a time out to take place after an hour of dormant time passes on the
    connection forcing the user to re-authenticate. Any suggestions on how to configure
    such behavior? I have been digging around the documentation for a while now and
    I have not found anything specific controlling this time out duration.
    Thanks for the help.
    -cb

