Loading a huge amount of records every second
Hi
Oracle 11g.
I have a source file located on a Unix file system.
Every SECOND about 5,000 records are APPENDED to this logfile.
I need to load those new records into the database as close to real time as possible.
The file is written by the RSYSLOG daemon, and it should be parsed and loaded
into the database as I mentioned (near-online).
What is the best practice to load the data from the logfile into the database ?
Since the records are APPENDED, it seems that it can't be external tables, because the
loader would have to "remember where it left off" and seek around in the file.
Thanks
You do not specify the format of the data within the file, which might have some bearing on the best solution.
If significant parsing of the input data is required, then I think this is a job for either a Pro*C program or a Java program.
How do you handle purging or re-allocating the input file if you are appending to it constantly? Eventually it will fill the disk. Do you start a new file every day at midnight, or what?
How is the data appended to the log file? What process does this? It would seem that modifying this process might be a consideration.
HTH -- Mark D Powell --
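Whatever language ends up doing the work, the core of the loader is a poller that remembers its byte offset in the file and reads only what was appended since the last pass. A minimal sketch in Python; the batch-loading callback and the polling interval are placeholders, not part of the original question:

```python
import time

def read_appended(path, offset):
    """Read lines appended since byte `offset`; return (lines, new_offset)."""
    with open(path, "r") as f:
        f.seek(offset)
        lines = f.readlines()
        return lines, f.tell()

def poll_loop(path, load_batch, interval=1.0):
    """Poll `path` every `interval` seconds and hand new lines to `load_batch`."""
    offset = 0
    while True:
        lines, offset = read_appended(path, offset)
        if lines:
            load_batch(lines)  # e.g. parse and bulk-insert via your DB driver
        time.sleep(interval)
```

A real implementation would also persist the offset (so a restart does not reload the whole file) and handle log rotation, e.g. by detecting that the file shrank below the stored offset.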
Similar Messages
-
Trying to shrink datafiles after deleting a huge amount of records
Dear Sir;
I have deleted more than 50 million records along with table’s partitions for the previous year.
now I'm trying to shrink a datafile with more than 10 GB of free space, but I am still unable to resize it due to
ORA-03297: file contains used data beyond requested RESIZE value.
How can we shrink these datafiles? Otherwise, what is the best way to delete a huge amount of records and reclaim disk space?
Thanks and best regards
Ali Labadi
Hi,
You could see this article of Jonathan LEWIS:
http://jonathanlewis.wordpress.com/2010/02/06/shrink-tablespace -
Need help in designing fetching of huge amount of records from a DB
Hello,
I have a standard application which fetches data from MS SQL Server using this standard code:
CallableStatement cs;
Connection connection;
cs = connection.prepareCall("{call " + PROCREQUEST_SP + "(?,?,?)}");
cs.execute();
ResultSet rs = cs.getResultSet();
while (rs.next()) {
    // process each row
}
Most of the queries the users run return no more than 10,000 records, and the code works OK for those.
But the database contains more than 7 million records, and I would like to enable the user to see all of them if they want to.
If I allow that with the current code, I eventually get a java.lang.OutOfMemoryError. How can I improve my code to handle this kind of task?
Hello Roy,
Yes, the DB is queried again and only the data chunk asked for is fetched. E.g. if the user is on page 2, containing records 1000-2000, and clicks on page 3, only records 2000-3000 should be fetched from the DB. This logic should be taken care of by your query.
This saves a lot of memory, so you won't face an OutOfMemoryError.
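The paging arithmetic described above can be sketched like this (the page size and the 1-based page numbering are assumptions, not something stated in the thread):

```python
def page_bounds(page, page_size=1000):
    """Return the (first, last) row numbers covered by a 1-based page number."""
    first = (page - 1) * page_size + 1
    last = page * page_size
    return first, last

# The query would then fetch only rows first..last, e.g. with ROW_NUMBER()
# in SQL Server: ... WHERE rn BETWEEN :first AND :last
```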
Regards,
Rahul -
How to insert a large amount of records at a time into Oracle
Hi, I'm Dilip. I'm a newbie to Oracle. For practice purposes I got some SQL code, containing huge amounts of records in text format, which I need to copy and paste into SQL*Plus on Oracle 9i. But when I try to paste into SQL*Plus, I'm unable to paste more than 50 lines of code at a time. One of the text files has 80 thousand lines of records to paste. Please help me. Here is the link for the text file I'm using: http://www.mediafire.com/view/?4o9eo1qjd15kyib . Any kind of help will be much appreciated.
sqlplus user1/pass1
@sql_text_file.sql
Running the above will execute all the SQL statements in the text file -
Handling Huge Amount of data in Browser
I need some information regarding large data handling in a web browser. Browser data is downloaded to the
cache of the local machine. So when the browser needs to download data on the order of MBs and GBs, how can we
handle that?
The requirement is as follows.
A performance monitoring application is collecting performance data of a system every 30 seconds.
The size of the data collected can be around 10 KB for each interval and it is logged to a database. If this application
runs for one day, the statistical data size will be around 30 MB (28.8 MB) . If it runs for one week, the data size will be
210 MB. There is no limitation on the number of days from the software perspective.
The user needs to see this statistical data in the browser. We are not sure whether transferring this huge amount of data to the
browser in one go is feasible. The user should be able to get the overall picture of the logged data for a
particular period and, if needed, should be able to drill down step by step to smaller ranges.
For example, if the user queries data between 10th Nov and 20th Nov, the user expects to get an overall idea of
the 11 days' data. Note that it is not possible to show each 30-second sample when showing 11 days of data. So some logic
has to be applied to present the 11 days data in a reasonably acceptable form. Then the user can go and select a
particular date in the graph and the data for that day alone should be shown with a better granularity than the overall
graph.
Note: The applet may not be a signed applet.
How do you download gigabytes of data to a browser? The answer is simple: you don't. A data analysis package like the one you describe should run on the server and send only the requested summary views to the browser.
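The server-side summarisation suggested above amounts to bucketing the 30-second samples into coarser intervals before anything is sent to the browser. A rough sketch, assuming the samples are (epoch_seconds, value) pairs; the bucket width would be chosen from the date range the user selected:

```python
def downsample(samples, bucket_seconds):
    """Average (timestamp, value) samples into buckets of `bucket_seconds` width."""
    buckets = {}
    for ts, value in samples:
        key = ts - (ts % bucket_seconds)  # start of the bucket this sample falls in
        buckets.setdefault(key, []).append(value)
    return sorted((k, sum(v) / len(v)) for k, v in buckets.items())
```

Drilling down is then just re-running the same aggregation over a narrower range with a smaller `bucket_seconds`.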
-
Changes to a write-optimized DSO containing a huge amount of data
Hi Experts,
We have appended two new fields to a DSO containing a huge amount of data (the new InfoObjects are an amount and a currency).
We were able to make the changes in Development (with the DSO containing data). But when we tried to
transport the changes to our QA system, the transport hung. The transport triggered a job which
filled up the logs, so we had to kill the job, which aborted the transport.
Has anyone had the same experience? Do we need to empty the DSO so we can transport
successfully? We really don't want to empty the DSOs, as reloading them will take time.
Any help?
Thank you very much for your help.
Best regards,
Rose
Emptying the DSO should not be necessary, neither for a normal DSO nor for a write-optimized DSO.
What is in the logs? Some sort of conversion for all the records?
Marco -
Report in Excel format fails for huge amount of data with headers!!
Hi All,
I have developed an Oracle report which fetches up to 5,000 records.
The requirement is to fetch up to 100,000 records.
The report fetches data if the headers are removed; if headers are included, it is not able to fetch the data.
Has anyone faced this issue?
Any idea how to fetch a huge amount of data in an Oracle report in Excel format?
Thanks & Regards,
KP.
Hi Manikant,
According to your description, performance is slow when displaying a huge amount of data with more than 3 measures in PowerPivot, so you need the hardware requirements to build a PowerPivot that can display such data, right?
PowerPivot benefits from multi-core processors, large memory and storage capacities, and a 64-bit operating system on the client computer.
Based on my experience, large memory, multiple processors, and even
solid-state drives benefit PowerPivot performance. Here is a blog about memory considerations for PowerPivot for Excel, for your reference.
http://sqlblog.com/blogs/marco_russo/archive/2010/01/26/memory-considerations-about-powerpivot-for-excel.aspx
Besides, you can identify which query was taking the time by using the tracing, please refer to the link below.
http://blogs.msdn.com/b/jtarquino/archive/2013/12/27/troubleshooting-slow-queries-in-excel-powerpivot.aspx
Regards,
Charlie Liao
TechNet Community Support -
Time limit exceeded error while updating a huge number of records in MARC
Hi experts,
I have an interface requirement in which a third-party system will send a big file, say 3 to 4 MB, into SAP. In the proxy we
use the BAPI BAPI_MATERIAL_SAVEDATA to save the material/plant data. Now, because of the huge amount of data, the SAP queues are
getting blocked, causing the time-limit-exceeded issues. As the BAPI can update only a single material at a time, it is called once for each
material we want to update.
Below is the relevant part of the code in my proxy:
* Call the BAPI to update the safety stock value.
CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
  EXPORTING
    headdata    = gs_headdata
*   clientdata  =
*   clientdatax =
    plantdata   = gs_plantdata
    plantdatax  = gs_plantdatax
  IMPORTING
    return      = ls_return.
IF ls_return-type <> 'S'.
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
  MOVE ls_return-message TO lv_message.
* Populate the error table and process the next record.
  CALL METHOD me->populate_error
    EXPORTING
      message = lv_message.
  CONTINUE.
ENDIF.
Can anyone please let me know the best possible approach for this issue?
Thanks in Advance,
Jitender
Hi Raju,
Use the following routine to get fiscal year/period using calday.
* Data definitions:
DATA: l_arg1 TYPE rsfiscper,
      l_arg2 TYPE rsfo_date,
      l_arg3 TYPE t009b-periv.
* Calculation:
l_arg2 = tran_structure-post_date.  "This is the date that you have to supply
l_arg3 = 'V3'.
CALL METHOD cl_rsar_function=>date_fiscper(
  EXPORTING i_date    = l_arg2
            i_per     = l_arg3
  IMPORTING e_fiscper = l_arg1 ).
result = l_arg1.
Hope it will solve your problem!
Please assign points.
Best Regards,
SG -
What is the best practice of deleting large amount of records?
hi,
I need your suggestions on the best practice for regularly deleting a large amount of records from SQL Azure.
Scenario:
I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove all records older than 3 days, every day.
For on-premise SQL Server I can use SQL Server Agent jobs, but since SQL Azure does not support SQL jobs yet, I have to use a web job scheduled to run every day to delete all old records.
To prevent table locking when deleting too many records at once, in my web job code I limit the number of deleted records to
5000, with a batch delete count of 1000, each time I call the delete stored procedure:
1. Get the total count of old records (older than 3 days)
2. Compute the number of iterations: iterations = total count / 5000
3. Call the SP in a loop:
for (int i = 0; i < iterations; i++)
    Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
And the stored procedure is something like this:
-- @table must be declared before use; a plausible declaration (implied by the usage below):
DECLARE @table TABLE ([RecordId] INT PRIMARY KEY)
BEGIN
    INSERT INTO @table
    SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
END
DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1
WHILE (@RowsDeleted > 0)
BEGIN
    WAITFOR DELAY '00:00:01'
    DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
    SET @RowsDeleted = @@ROWCOUNT
END
It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, really too long.
Following is the web job log for deleting around 1.7 million records:
[01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
[01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
1721586
[01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
1000, Total iterations: 345
[01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
[01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
00:03:25.2410404
[01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
[01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
00:03:16.5033831
[01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
[01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
00:03:33.6439434
Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, so the total time is around
11 hours.
Any suggestion to improve the delete performance?
This is one approach:
Assume:
1. There is an index on 'createtime'
2. The peak-time insert rate (avgN) is N times the average (avg). E.g. if the average per hour is 10,000 and peak time is 5 times that, we get 50,000 per hour. This doesn't have to be precise.
3. The desirable maximum number of records deleted per batch is 5,000; this doesn't have to be exact.
Steps:
1. Find count of records more than 3 days old (TotalN), say 1,000,000.
2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not, and peak inserts can be 5 times the average per period, set the number of delete batches to 200 * 5 = 1,000.
3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
4. Create a delete statement and a loop that deletes records with creation time < today - (3 days - 4.32 * i minutes), where i is the iteration number from 1 to 1,000.
This way the number of records deleted in each batch is uneven and unknown, but it should mostly stay within 5,000; and even though you run many more batches, each batch will be very fast.
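The time-sliced scheme above can be sketched as follows: instead of collecting row IDs, each batch just advances a timestamp cutoff, so the WHERE clause on the indexed `createtime` column alone bounds the work. The slicing of the old-data range into equal time steps is the assumption carried over from the example:

```python
from datetime import datetime, timedelta

def deletion_cutoffs(oldest, boundary, batches):
    """Split the range [oldest, boundary] into `batches` cutoff timestamps.
    Running DELETE ... WHERE createtime < cutoff[i] for each cutoff in turn
    then removes roughly one time slice of rows per batch."""
    step = (boundary - oldest) / batches
    return [oldest + step * i for i in range(1, batches + 1)]
```

Each cutoff would drive one small DELETE, e.g. `DELETE FROM MyTable WHERE CreateTime < @cutoff`, with no staging table and no TOP/IN subquery.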
Frank -
Performance issue fetching a huge number of records with "FOR ALL ENTRIES"
Hello,
We need to extract a huge amount of data (about 1,000,000 records) from the VBEP table, whose overall size is about 120 million records.
We currently use this statement:
CHECK NOT ( it_massive_vbep[] IS INITIAL ) .
SELECT (list of fields) FROM vbep JOIN vbap
       ON vbep~vbeln = vbap~vbeln AND
          vbep~posnr = vbap~posnr
  INTO CORRESPONDING FIELDS OF w_sched
  FOR ALL ENTRIES IN it_massive_vbep
  WHERE vbep~vbeln = it_massive_vbep-tabkey-vbeln
    AND vbep~posnr = it_massive_vbep-tabkey-posnr
    AND vbep~etenr = it_massive_vbep-tabkey-etenr.
Notice that the internal table it_massive_vbep always contains records with a fully specified key.
Do you think this query could be further optimized?
many thanks,
-Enrico
There are 2 options to improve performance:
+ you should work in blocks of 10,000 to 50,000 records
+ you should check archiving options; does this really make sense?
> VBEP table, which overall size is about 120 million records.
Split it_massive_vbep into smaller tables (it_vbep_2):
CHECK NOT ( it_vbep_2[] IS INITIAL ) .
get runtime field start.
SELECT (list of fields)
INTO CORRESPONDING FIELDS OF TABLE w_sched
FROM vbep JOIN vbap
ON vbep~vbeln = vbap~vbeln AND
vbep~posnr = vbap~posnr
FOR ALL ENTRIES IN it_vbep_2
WHERE vbep~vbeln = it_vbep_2-vbeln
AND vbep~posnr = it_vbep_2-posnr
AND vbep~etenr = it_vbep_2-etenr.
get runtime field stop.
t = stop - start.
write: / t.
Be aware that even 10,000 will take some time.
Another question: how did you get the 1,000,000 records into it_massive_vbep? They were not typed in, but somehow selected.
Change that FOR ALL ENTRIES into a JOIN and it will be much faster.
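The block-wise advice is language-independent; the splitting itself is just fixed-size slicing of the driver table. A sketch in Python for brevity (in ABAP this would be a LOOP that appends into a staging table and fires the SELECT every N rows):

```python
def chunks(rows, size):
    """Yield successive slices of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

# Each chunk would then drive one FOR ALL ENTRIES selection, e.g.:
# for block in chunks(it_massive_vbep, 10000):
#     run_for_all_entries_select(block)
```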
Siegfried -
I used Migration Assistant to load a Time Machine backup onto a new Mac. The first TM backup after that took some time, which is perhaps not surprising. But the backups thereafter have all taken hours, with huge amounts of "indexing" time. Time to reload TM?
Does every backup require lots of indexing? If so, the index may be damaged.
Try Repairing the backups, per #A5 in Time Machine - Troubleshooting.
If that doesn't help, see the pink box in #D2 of the same link. -
Hi,
I have copied this problem from your site, as they could explain it better than I can, and my problem is the same as theirs. After that I explain what is really happening to my laptop.
''Mozilla/Firefox 4 will not SHOW on my computer SCREEN.
Firefox won't load onto my computer. I have Windows 7, and each time I try to run the program to install it, it tells me I have another version still waiting to be installed that requires my computer to reboot. When I check "yes" to reboot, my computer reboots but there is still no Firefox. When I try to uninstall the old Mozilla program from my Control Panel, there are no Mozilla files or programs listed, but when I go directly to the Windows folder on the C: drive, there is a Mozilla program folder there. When I go into that folder and attempt to open the uninstall program, I get an error message that says something to the effect of: "the version of Mozilla on this computer is not compatible with my OS", something about it being an x86 program. My Firefox 3.6 worked fine 2 days ago. What's wrong??? ''
'''Firefox 4 using huge amounts of RAM on sites with JavaScript
I have been observing Firefox using huge amounts of RAM when I am on sites that use JavaScript to rotate images. I have tried a couple of different sites and have monitored the RAM usage. With JavaScript enabled, Firefox 4 continues to grow its RAM usage by about 10 MB a minute. I have had the usage hit as high as 1.5 GB. As a comparison I have monitored the same sites in Internet Explorer and have not seen the same issue, just to eliminate a site issue.
Turning off JavaScript solves the issue and eventually frees the RAM.
I am using XP Pro, 3 GB RAM. '''''
My experience with Mozilla Firefox:
Here is what really happens when I install Mozilla Firefox on my laptop: you can see the file is there, but when I click the program to open it, it does not show on the screen. When I check the WTM (Windows Task Manager) it shows that it is running, yet the laptop just becomes very slow and I have to reboot it. I cannot install any other Mozilla version because of the same problem and all of the above. :(
I really need Mozilla to work as soon as possible due to important deadlines I have pending. I can only use Chrome, but it is not my ideal program. Can you help me, please?
Thank you
Vitor Mendes
If it opens in safe mode, you may have a problematic add-on. Try the procedure in the [[Troubleshooting extensions and themes]] article.
-
Amount of records loaded to DSO is not the same as in PSA
I performed a load from PSA to DSO. I have 2 DataSources under this DSO, and the number of records loaded from PSA for these 2 DataSources into the DSO is not consistent. The PSA for the 1st DataSource has 3k records and the 2nd DataSource has 5k records; when I load both of these DataSources into the DSO, there are fewer records. Does anyone here know why this is so?
hi,
A DSO has an overwrite option, and hence you have fewer records.
Check if you have enough key fields in the DSO, so that you can reduce the number of records getting overwritten.
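The overwrite behaviour described above can be illustrated with a dictionary keyed by the DSO key fields: records sharing a key collapse to the last one loaded, which is why the DSO ends up with fewer records than the PSA. The field names here are invented for illustration:

```python
def load_to_dso(records, key_fields):
    """Simulate DSO overwrite: later records with the same key replace earlier ones."""
    dso = {}
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        dso[key] = rec  # overwrite on key collision
    return list(dso.values())
```

Adding more fields to `key_fields` makes keys more distinct, so fewer records are overwritten, which mirrors the advice in the answer.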
Ramesh -
Hi Community,
since Friday my iPhone 4 started using huge amounts of mobile data and rapidly drains the battery (50% in 3 hours!). It also gets quite warm even when I don't use it at all.
I suspect an app does this, because in flight mode battery is ok.
How do I find out which app is to blame without having to uninstall all apps?
Thanks for your help.
Kind regards
Nymphenburg
You need to look into using the SQL*Loader utility:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96652/part2.htm#436160
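For illustration, a minimal SQL*Loader control file for the kind of bulk load that documentation covers; the data file, table, and column names here are invented placeholders, not taken from the thread:

```
LOAD DATA
INFILE 'export.dat'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY ','
(col1, col2, col3)
```

It would be invoked along the lines of `sqlldr user/pass control=load.ctl`.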
Also, Oracle supports BULK inserts using PL/SQL procedures:
http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#28178 -
Data transfer process (several data packages due to a huge amount of data)
Hi,
a)
I've been uploading data from ERP via PSA, ODS, and InfoCube.
Due to the huge amount of data in ERP, BI splits the data into two data packages.
When processing the data into the ODS, the system deletes a few datasets.
This is not done in the "Filter" step but in "Transformation".
General question: how can this be?
b)
As described in a), the data is split by BI into two data packages due to the amount of data.
To avoid this behaviour, I entered a few more selection criteria in the InfoPackage.
As a result I upload the data several times, each time with different selection criteria in the InfoPackage.
Finally I have the same data in the ODS as in a), but this time without data being deleted in the "Transformation" step.
Question: What is the general behaviour of BI when splitting data into several data packages?
BR,
Thorsten
Hi All,
Thanks a million for your help.
My conclusions from your answers are the following.
a) Since the ODS is standard, within the transformation no datasets are deleted; they are aggregated.
b) Uploading a huge amount of datasets is possible in two ways:
b1) with selection criteria in the InfoPackage and several uploads
b2) without selection criteria in the InfoPackage, and therefore an automatic split of the datasets into data packages
c) both ways should have the same result within the ODS
Ok. Thanks for that.
So far I have only checked the data within the PSA. In the PSA, the numbers of datasets are not equal for variants b1 and b2.
I guess this is normal technical behaviour of BI.
I am fine when results in ODS are the same for b1 and b2.
Have a nice day.
BR,
Thorsten