Trying to boost the performance of RMAN on AIX 5L
Hi all,
I need to find the best "numbers" for my test environment:
I'm looking for opinions from people with hands-on RMAN experience.
I'm trying to find the best number of channels, MAXOPENFILES, and additional parameters.
I'm open to recommendations for changing operating system or database parameters as well.
Environment Info:
DB: 10.2.0.5
ASM: 10.2.0.5
OS: AIX 5L
CPUs: 16
DISKGROUP: 1 (with 16 ASMDISK)
Adapter: 4 HBA (4 Gbit)
I want to find out the maximum rate that can be achieved, so for these tests I will use only "BACKUP VALIDATE". When I get the best numbers I'll go to TSM; the backup will be done via LAN-Free.
Below the results of the tests using BACKUP VALIDATE:
=====================================================
1° Test
Database Parameter
backup_tape_io_slaves=TRUE
dbwr_io_slaves=0
db_writer_processes=6
tape_asynch_io= TRUE
disk_asynch_io= TRUE
RMAN Parameter
4 Channel
4 MAXOPENFILES for each Channel
TIME: 01:08:13
SIZE IN:892.11GB
RATE: 223MB/s
=====================================================
=====================================================
2° Test
Database Parameter
backup_tape_io_slaves=TRUE
dbwr_io_slaves=0
db_writer_processes=6
tape_asynch_io= TRUE
disk_asynch_io= TRUE
RMAN Parameter
4 Channel
3 MAXOPENFILES for each Channel
TIME: 00:57:42
SIZE IN:892.11GB
RATE: 264MB/s
=====================================================
=====================================================
3° Test - here is my best number
Database Parameter
backup_tape_io_slaves=TRUE
dbwr_io_slaves=0
db_writer_processes=6
tape_asynch_io= TRUE
disk_asynch_io= TRUE
RMAN Parameter
4 Channel
2 MAXOPENFILES for each Channel
TIME: 00:53:01
SIZE IN:892.11GB
RATE: 287MB/s
=====================================================
I ran another test with 2 channels and 3 MAXOPENFILES, and performance was like the 3° Test - about 54 minutes.
I could not understand why using 2 MAXOPENFILES with 4 channels (= 8 open files in total) performs better than increasing the parallelism of the datafile reads.
In my 1° test I get 784 GB/hour.
In my 3° test I get 1009 GB/hour.
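For reference, the rates quoted here are just size divided by elapsed time. A quick sketch of the arithmetic (the 892.11 GB size comes from the test output above; rounding may differ by a point from the quoted figures):

```python
def backup_rate(size_gb, hours, minutes, seconds):
    """Return (MB/s, GB/hour) for a backup of size_gb completed in h:m:s."""
    elapsed = hours * 3600 + minutes * 60 + seconds  # wall-clock seconds
    mb_per_s = size_gb * 1024 / elapsed              # 1 GB = 1024 MB
    gb_per_hour = size_gb * 3600 / elapsed
    return round(mb_per_s), round(gb_per_hour)

# Test 1: 892.11 GB in 01:08:13 -> about 223 MB/s
# Test 3: 892.11 GB in 00:53:01 -> about 287 MB/s
```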
It's a big difference, and this test database (~900 GB) is small compared with production.
Why does 8 open files in total (4 channels x 2 MAXOPENFILES) turn out to be the optimal number?
Thanks,
Levi Pereira
Hi Dude,
Thanks for the input.
Dude wrote:
I think the difference could be memory and process related, combined with hardware limitations. RMAN will read the datafiles from disk into memory. Doesn't parallelism introduce additional channels, with each channel writing to a separate backup set? From what I understand, too many I/O channels can reduce overall performance and cause additional overhead, in particular if the media or disk is the bottleneck. On the other hand, if the CPU was the bottleneck, more CPUs or processes could boost performance. When processes run on different CPUs, caching is less effective. The same applies to your HBA adapter.
I monitored the whole backup process and saw that the issue was always I/O performance: RMAN was not waiting on CPU and also had no waits on disks. I knew I could increase my throughput, but I did not know how (i.e. resources were left over but I did not know how to exploit them).
>
As a rule, the number of channels used in carrying out an individual RMAN command should match the number of physical devices accessed in carrying out that command. Striped disk configurations involve hardware multiplexing, so the level of RMAN multiplexing does not need to be as high, and a smaller MAXOPENFILES setting can result in faster performance.
I can use up to 4 drives (4 channels); they are LTO-5, each with an acceptable rate of about 120 MB/s, so I could go up to 480 MB/s. I know it's not exact; there are several extra factors.
I want to be sure of the highest rate I could get using four channels.
The math (x = ?) remains unknown:
16 (ASM disks) + 2 (HBA) + 256 (datafiles): is it OK to use 4 channels + 2 MAXOPENFILES + 12 DB_WRITER_PROCESSES?
If I knew the math to find the ideal number of channels and MAXOPENFILES, that would be my great discovery.
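There probably is no exact formula, but a rough ceiling is the slowest stage of the pipeline. A sketch under stated assumptions: the tape and HBA rates come from this thread, while the ~70 MB/s per ASM disk is a made-up placeholder:

```python
def pipeline_ceiling(stage_rates_mb_s):
    """Aggregate backup throughput is capped by the slowest stage (MB/s)."""
    return min(stage_rates_mb_s.values())

estimate = pipeline_ceiling({
    "tape: 4 LTO-5 drives x ~120 MB/s": 4 * 120,
    "fabric: 4 x 4 Gbit HBAs, ~400 MB/s usable each": 4 * 400,
    "disk: 16 ASM disks x ~70 MB/s (assumed)": 16 * 70,
})
# With these assumed numbers, tape is the bottleneck at ~480 MB/s.
```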
I guess you already checked the Tuning Backup and Recovery chapter of the Database Backup and Recovery Advanced User's Guide, which also shows that you can use the V$BACKUP_SYNC_IO and V$BACKUP_ASYNC_IO views to determine the source of backup or restore bottlenecks and to see detailed progress of backup jobs. http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmtunin.htm#i1006195
Here are the additional notes I used:
RMAN Performance Tuning Using Buffer Memory Parameters [ID 1072545.1]
RMAN Performance Tuning Diagnostics [ID 311068.1]
Advise On How To Improve Rman Performance [ID 579158.1]
Using V$BACKUP_ASYNC_IO / V$BACKUP_SYNC_IO to Monitor RMAN Performance [ID 237083.1]
http://levipereira.wordpress.com/2010/11/20/tuning-oracle-rman-jobs/
I tried one more change:
On large workloads, the database writer may become a bottleneck. If it does, then increase the value of DB_WRITER_PROCESSES. As a general rule, do not increase the number of database writer processes above one for each pair of CPUs in the system or partition.
http://download.oracle.com/docs/cd/B28359_01/server.111/b32009/appa_aix.htm#BEHGGBAJ
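The quoted rule of thumb is easy to put into numbers (a trivial sketch; `max_db_writers` is a hypothetical helper, and the 16-CPU count comes from the environment info above):

```python
def max_db_writers(cpu_count):
    """Guideline quoted above: at most one DBWR process per pair of CPUs."""
    return cpu_count // 2

# On this 16-CPU box the guideline works out to 8 writer processes,
# so raising DB_WRITER_PROCESSES from 6 to 12 actually exceeds it.
```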
I changed DB_WRITER_PROCESSES from 6 to 12.
My latest RATE: 305 MB/s.
Cheers,
Levi Pereira
Similar Messages
-
Boost the Performance of the database
I am getting user calls saying that the performance of the database is poor. I want to increase the performance of the database; can anyone list the checks and changes I have to make to increase the performance?
I am using the topas command to find the top consuming processes in UNIX; apart from this, what areas do I have to look at to boost the performance?
Help me in this regard.
Vel
There is no one area where you can pinpoint and say "this needs tuning". Performance tuning needs to be addressed from all fronts, but make one change at a time and see if it gives the desired improvement. The areas you have to look at are:
1. table design
2. sql tuning, proper use of indexes
3. sizing the tables, indexes
4. setting up proper SGA parameters, if you have memory in the machine, make optimal use of it by allocating it to oracle.
5. use of procedures and functions.
You may or may not get a call from the user, but if you feel that something could be improved by tuning, I guess it's the job of the DBA to do that and squeeze every bit of performance from the hardware and the software.
check out oracle performance tuning docs for more detailed info.
Mukundan. -
Help to boost the performance of my proxy server
Out of my personal interest, I am developing a proxy server in java for enterprises.
I've made the design as such the user's request would be given to the server through the proxy software and the response would hit the user's browsers through the proxy server.
User - > Proxy software - > Server
Server -> Proxy software -> User
I've designed the software in java and it is working
fine with HTTP and HTTPS requests. The problem I am worried about is this:
for each user request I create a thread to serve it. So if 10,000 users access the proxy server at the same time,
I fear my proxy server would consume all the resources of the machine it is installed on, because I'm using one thread per request and response.
Is there any alternative solution for this in java?
Somebody suggested I use Java NIO. I'm confused. I need a solution
to keep my proxy server free of performance issues. I want my
proxy server to be the first proxy server entirely
written in Java with good performance that suits even
large organisations (like the Sun Java web proxy server, which is written in C).
How can I boost the performance? Users should have no sense of accessing the remote server through a proxy; it should feel like accessing the web server directly, with no performance lag, as fast as C. I need to do this in Java. Please help.
I think having a thread per request is fine.
Maybe I got it wrong, but I thought the point of
using NIO with sockets was to get rid of the one-thread-per-request combo?
Correct. A server which has one thread per client doesn't scale well.
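For what it's worth, the single-threaded multiplexing idea behind NIO can be sketched in Python with the standard selectors module (a toy echo pass over pre-connected socket pairs, purely illustrative, not a real proxy):

```python
import selectors
import socket

def serve_once(server_sockets):
    """Echo one message back on every connection using a single thread."""
    sel = selectors.DefaultSelector()
    for s in server_sockets:
        s.setblocking(False)                # never block on one client
        sel.register(s, selectors.EVENT_READ)
    handled = 0
    while handled < len(server_sockets):
        for key, _ in sel.select():         # wait for any readable socket
            data = key.fileobj.recv(1024)
            key.fileobj.sendall(b"echo:" + data)
            sel.unregister(key.fileobj)
            handled += 1
    sel.close()
    return handled
```

One event loop handles all connections, so resource use grows with the number of sockets rather than the number of threads, which is the scaling argument being made above.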
Kaj -
Doubts over the Performance in Developing a Chat application
We are developing a chat application which needs to update the chat content in the database for the duration of the chat (i.e., for the duration of a session). At the same time, the page should refresh and show the current content on both ends. In addition, the database tables have to be checked to detect the occurrence of a network error on both sides.
We have developed it as a Browser based chat and we have used PHP with MySQL. The performance is slow.
Can anyone give a suggestion as to whether we can develop the chat application completely from the scratch and if we do that which technology should we choose to boost the performance.
If anyone is not clear about my problem, just mail me and I'll explain it in more detail.
Thanks in advance
Hi,
I just wanted to know the answers to the following:
2) Network failure - does that mean the browser got killed, or that data did not arrive / the page was not refreshed?
3) Which web server are you using?
In Java it is very easy to develop a chat application, using applets and servlets (if you consider performance of utmost importance).
Updating the database should not be done continuously for the whole duration; rather, it should be done at the event level (data changes / keystrokes / session-info changes).
I'm not sure about PHP running at the client, but I can suggest you use two frames: one for typing and one for displaying information, which gets the status of the other users in the chat from the server (e.g. from the session, driven by another component that is static).
Anyhow, I just put down my ideas (I'm sure you know all these things).
with regards
Lokesh T.c -
Boost import performance?
Hi,
I'm doing some database imports into a 9.2.0 database and the import takes 12 hours. OK, it's a large dump file, but I am wondering if there is anything I can do to boost the performance of the database during an import.
Thanks,
steve.
Difficult to tell, but there are some general tips:
- Set a large buffer size.
- Set commit=Y.
- Import without indexes (indexes=N) and create the indexes and enable the FK constraints afterwards. You need to have a script ready for that of course. -
Trying to perform recovery RMAN in OEM. Fails!!!
I am trying to learn to do backups within OEM. I see it uses RMAN behind the scenes.
Anyway, I have 2 databases residing on one machine: a production database "a" and a clone of it, "b". I performed the cloning in OEM fine.
I then performed a full database backup of "b" using OEM, which included the control files. This seemed to go well. I had the files created on a mapped drive, "s:\".
Now I am trying to use the perform recovery tools in OEM and I can't get anywhere. I am not familiar with RMAN, but I thought that was the purpose of the OEM to provide out of the box solution. The RMAN script generated behind the scenes looks like:
set dbid 1508042233;
set controlfile autobackup format for device type disk to '%F';
run {
allocate channel oem_restore type disk;
restore controlfile from autobackup;
}
shutdown immediate;
startup mount;
I changed the 2nd line to match my drive mapping 's:\%F';
My error message is as follows:
Error in Restoring Control File
Recovery Manager: Release 9.0.1.5.1 - Production
(c) Copyright 2001 Oracle Corporation. All rights reserved.
RMAN> connected to target database: mycopy1 (not mounted)
using target database controlfile instead of recovery catalog
RMAN> executing command: SET DBID
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00579: the following error occurred at 08/17/2005 20:57:06
RMAN-03002: failure during compilation of command
RMAN-03013: command type: CSET
RMAN-06188: cannot use command when connected to target database
RMAN> executing command: SET CONTROLFILE AUTOBACKUP FORMAT
RMAN> 2> 3> 4>
allocated channel: oem_restore
channel oem_restore: sid=8 devtype=DISK
Starting restore at 17-AUG-05
released channel: oem_restore
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00579: the following error occurred at 08/17/2005 20:57:07
RMAN-03002: failure during compilation of command
RMAN-03013: command type: restore
RMAN-03002: failure during compilation of command
RMAN-03013: command type: IRESTORE
RMAN-06495: must explicitly specify DBID with SET DBID command
RMAN> Oracle instance shut down
RMAN> connected to target database (not started)
Oracle instance started
database mounted
Total System Global Area 151810128 bytes
Fixed Size 282704 bytes
Variable Size 100663296 bytes
Database Buffers 50331648 bytes
Redo Buffers 532480 bytes
RMAN> Recovery Manager complete.
Starting restore at 17-AUG-05
released channel: oem_restore
RMAN-00571: ===============================================
I tried going to the command line and working with it
using the following script:
(Note here i tried working on database "a" which I also have a backup for)
set dbid 21410814117;
connect target
set controlfile autobackup format for device type disk to 'S:\%F';
run {
allocate channel oem_restore type disk;
restore controlfile from autobackup;
}
shutdown immediate;
startup mount;
*********************************
RMAN> set dbid 21410814117;
executing command: SET DBID
RMAN> connect target
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00579: the following error occurred at 08/17/2005 21:18:43
RMAN-06189: current DBID 4294967295 does not match target mounted database (2141081417)
RMAN> set controlfile autobackup format for device type disk to 'S:\%F';
executing command: SET CONTROLFILE AUTOBACKUP FORMAT
using target database controlfile instead of recovery catalog
RMAN>
RMAN> run {
2> allocate channel oem_restore type disk;
3> restore controlfile from autobackup;
4> }
allocated channel: oem_restore
channel oem_restore: sid=10 devtype=DISK
Starting restore at 17-AUG-05
released channel: oem_restore
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00579: the following error occurred at 08/17/2005 21:18:45
RMAN-03002: failure during compilation of command
RMAN-03013: command type: restore
RMAN-03002: failure during compilation of command
RMAN-03013: command type: IRESTORE
RMAN-06496: must use the TO clause when the database is mounted or open
RMAN>
RMAN> shutdown immediate;
database dismounted
Oracle instance shut down
RMAN> startup mount;
connected to target database (not started)
Oracle instance started
database mounted
Total System Global Area 151810128 bytes
Fixed Size 282704 bytes
Variable Size 100663296 bytes
Database Buffers 50331648 bytes
Redo Buffers 532480 bytes
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00579: the following error occurred at 08/17/2005 21:19:01
RMAN-06189: current DBID 4294967295 does not match target mounted database (2141081417)
From what I understand, my two databases' DBIDs are "a" = 21410814117 and
"b" = 1508042233. I have no idea where this other one is coming from.
Any help would be appreciated. I wouldn't mind being able to do it at the command line, so at least I would have an idea of what is going on, but OEM doesn't seem to work at all.
Thanks,
John
Do one thing: keep the target db in NOMOUNT mode and then run the recovery using OEM.
Regards,
http://askyogesh.com -
Anyone actually tried out the gaming performance?
Hey, I'm looking to get a new MacBook and was hoping someone who already has one could run something like the Doom 3 demo so we could hear first-hand what it's like. I realize it's not a gaming computer and I'm not hoping for 40 fps on high settings in graphically intense games; gaming will not be the main use of the laptop if I get one, I am just curious how it is. I have actually gone through the Doom 3 demo on my eMac (despite the fact it is way below the requirements) with everything turned down, and it wasn't bad as long as nothing onscreen moved. Anyway, it would be cool to hear what game you tried, what kind of settings, and whether it was smooth or a little choppy.
Thanks,
Steve
I installed Warcraft on my wife's new MacBook. It is the low-end model (1.83 GHz) with 1 GB of RAM. She was happy with the performance. I was not impressed.
Here are some metrics. I bumped all settings down to the minimum. The shader options are not available, as the integrated Intel video *****. The terrain distance option was bumped to the 1/4 position (meaning just a bit longer than the shortest viewable terrain distance). No vertical sync. Hardware cursor and mouse were both enabled.
Machine lag was not an issue. Stormwind and Ironforge both were responsive enough, even when crossing into the drop zone near the Ironforge Auction house.
The framerates were not very good, though. In heavily populated areas I was looking at 14-24 frames per second. It was smooth "enough", but anything less than 24 frames doesn't look as smooth.
The framerates in other areas were better but still not great. Now, it's time for a comparison...
Her Sony Vaio V505-DX with an onboard ATI 9200 at only 32mb performs much better at the same settings. The game is always played with shaders turned on, terrain about the same and all other settings on low. It always manages at least 30 frames per second.
The machine if you're curious:
http://www.amazon.com/gp/product/B0000D90BR/102-6634257-7920946?v=glance&n=541966
I'd say the move to using integrated video was a poor decision. I don't expect crazy performance but this move has further pushed the Mac away from mid-range gamers and potentially game developers. I would not game on one. My wife games maybe an hour or two an evening (we don't watch much TV) so this is an alright fit for her.
If Apple really wanted to bring the games to the Mac platform they would make it attractive to both parties (developers and consumers) but I don't feel they did a good job on that one. -
My whole document comes out as low-res when I export to PDF, and even the InDesign file is now low-res. I tried changing display performance, but that's not helping.
Are your links up to date? What settings did you use?
-
Bc4j,dacf:[3.2.3] Searching for the performance boost switch
As I mentioned in the thread http://technet.oracle.com:89/ubb/Forum2/HTML/008025.html
I am investigating the performance of an application.
Some new information on this: I get some speedup by deactivating the locking, but the app is still not really responsive. Paging down a grid takes about 4-6 sec per page. Another test showed that there is no difference whether I use one table or a join.
Now I made some JSP. These are running real fast. The normal pages load under 2sec.
So far as I can see, the problem seems to be in the communication between BC4J and the DACs.
Has anyone some optimization suggestion? (I use the generated frames and JSPs)
Bernd
Hi Bernd,
currently I try to speed up our application that is based on applets and DACFs in local deployment mode, too.
Here are my results and I would like to discuss my observations within this thread:
- Startup time (login) of about 30 sec. is normal and can't be improved programmatically.
- Use JDK 1.3 (the Swing stuff is improved a lot and the DACs are derived from it).
- Use LOV controls instead of combo boxes wherever possible.
- Trace the SQL statements generated by the business components and analyze the results. Maybe your code executes some queries unnecessarily.
- Build a prototype of a form with high complexity (much business logic) and check if its speed is sufficient for all your clients - this should be a KO criterion! If this test fails, you have to search for another architecture (JSPs).
But the most valuable advice is this: try to reduce the data transfer between your forms and the database as much as you can.
* Using a grid control is very convenient, but is it really necessary to fill it with a statement like SELECT * FROM BLAH_TABLE?
* Use SetRestrictedQuery(...) with a parameter to reduce the transfer of the query results of the LOV control.
* ViewLinks are fine - they fetch the detail information only when it's necessary.
* I doubt that it will be a good three-tier server-side component design, but what about defining a view for complex joins in the database and wrapping a BC entity around it (of course you have to rethink your UI), instead of setting up different entities, views, and view links in the BC layer?
What I really want to know is whether putting the BC tier in the database will result in better response times for applets. I won't go on this last adventure trip if I'm not convinced it will be worth the effort.
Can anyone confirm this, or am I wrong on some points?
Have fun!
@i -
Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube
Hi BW Guru's,
I have an unresolved issue and our team is still working on it.
I have already posted several questions on this, but I am still not clear on how to reduce the time of the Rollup of Aggregates process.
I have requested an OSS note and am searching myself, but still could not find one.
Finally, I executed one of the cubes in RSRV with the database check
"Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the errors and executed it once again, but I still found warning messages. They are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated
ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
ORACLE: Index /BIC/D1001072~010 has possibly degenerated
ORACLE: Index /BIC/D1001132~010 has possibly degenerated
ORACLE: Index /BIC/D1001212~010 has possibly degenerated
ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
I don't know how to move further on this. Can anyone tell me how to tackle this problem to increase the performance of the Rollup of Aggregates (PCA InfoCubes)?
I regularly create indexes and statistics to improve the performance; it works for a couple of days and then the performance of the rollup of aggregates gradually comes down again.
Thanks and Regards,
Venkat
Hi,
Check in a SQL client the SQL created by BI against the query you run directly at your physical layer.
The difference between these two should be 2-3 seconds; otherwise you have problems (these seconds are for scripts needed by BI).
If you use "like" in your SQL, then forget indexes...
For more information about indexes, check Google or ask your DBA.
Last, I mentioned that the materialized view is not perfect, but it helps a lot... so why not try to split it into smaller ones?
ex...
logical dimensions
year-half-day
company-department
fact
quantity
instead of making one...make 3,
year - department - quantity
half - department - quantity
day - department - quantity
and add them as data sources and assign them the appropriate logical level at the business layer in the administrator...
Do you use the partitioning functionality?
I hope I helped...
http://greekoraclebi.blogspot.com/
/////////////////////////////////////// -
How to Improve the Performance of SQL Server and/or the hardware it resides on?
There's a particular stored procedure I call from my ASP.NET 4.0 Web Forms app that generates the data for a report. Using SQL Server Management Studio, I did some benchmarking today and found some interesting results:
FYI SQL Server Express 2014 and the same DB reside on both computers involved with the test:
My laptop is a 3 year old i7 computer with 8GB of RAM. It's fine but one would no longer consider it a "speed demon" compared to what's available today. The query consistently took 30 - 33 seconds.
My client's server has an Intel Xeon 5670 processor and 12 GB of RAM. Those seem like pretty good specs. However, the query consistently took between 120 and 135 seconds to complete... about 4 times as long as my laptop!
I was very surprised by how slow the server was. Considering that it's also set to host IIS to run my web app, this is a major concern for me.
If you were in my shoes, what would be the top 3 - 5 things you'd recommend looking at on the server and/or SQL Server to try to boost its performance?
Robert
What else runs on the server besides IIS and SQL Server? Is it used for anything other than the database and IIS?
Is IIS causing a lot of I/O or CPU usage ?
Is there a max limit set for memory usage on SQL Server? There SHOULD be, and since you're using IIS too, you need to keep more memory free for that as well.
How is the memory pressure? (Check the PLE counter and post the results.)
SELECT [cntr_value] FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'
Check the error log and the event viewer maybe something bad there.
Check the indexes for fragmentation, and see if the statistics are up to date (and enable trace flag 2371 if you have large tables with more than 1 million rows).
Is there an antivirus present on the server ? Do you have SQL processes/services/directories as exceptions ?
There are a lot of unknowns; you should at least run Profiler and post the results to see what goes on while you're having slow responses.
"If there's nothing wrong with me, maybe there's something wrong with the universe!" -
Help to improve the performance of a procedure.
Hello everybody,
First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
Now let's jump to the problem. We have a table (a big one, but we'll need only a few fields) with some information about calls, called table1. There is also another one with exactly the same structure, which is empty, and we have to transfer the records from the first one into it.
The shorter calls (less than 30 minutes) have segmentID = 'C1'.
The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
So far, so good. Now we need to insert these call records into the second table. The C1 are easy - one record = one call. But the partial ones we need to combine so they become one whole call. This means that we have to take one of the first parts (C21), find if there is a middle part (C22) with the same calling/called numbers and with 30 minutes difference in date/time, then search again if there is another C22 and so on. And last we have to search for the last part of the call (C23). In the course of these searches we sum the duration of each part so we can have the duration of the whole call at the end. Then we are ready to insert it in the new table as a single record, just with new duration.
But here comes the problem with my code... The table has A LOT of records and this solution, despite the fact that it works (at least in the tests I've made so far), it's REALLY slow.
As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
So I decided to come here and ask you for some tips on how to improve the performance of this.
I think you are getting confused already, so I'm just going to put some comments in the code.
I know it's not a procedure as it stands now, but it will be once I create a better code. I don't think it matters for now.
DECLARE
CURSOR cur_c21 IS
select * from table1
where segmentID = 'C21'
order by start_date_of_call; -- start_date_of_call holds the beginning of a specific part of the call (DATE type).
CURSOR cur_c22 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
CURSOR cur_c22_2 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
cursor cur_c23 is
select * from table1
where segmentID = 'C23'
order by start_date_of_call;
v_temp_rec_c22 cur_c22%ROWTYPE;
v_dur table1.duration%TYPE; -- used to store the duration of the call (NUMBER type).
BEGIN
insert into table2
select * from table1 where segmentID = 'C1'; -- inserting the calls which are less than 30 minutes long
-- and here starts the mess
FOR rec_c21 IN cur_c21 LOOP -- taking the first part of the call
v_dur := rec_c21.duration; -- recording its duration
FOR rec_c22 IN cur_c22 LOOP -- starting to check if there is a middle part for the call
IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)
/* if the numbers are the same and the date difference is 30 minutes then we have a middle part and we start searching for the next middle. */
THEN
v_dur := v_dur + rec_c22.duration; -- updating the new duration
v_temp_rec_c22 := rec_c22; -- recording the current record in another variable because I use it for the next check
FOR rec_c22_2 in cur_c22_2 LOOP
IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND
(rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)
/* logic is the same as before but comparing with the last value in v_temp...
And because the data in the cursors is ordered by date in ascending order it's easy to search for another middle parts. */
THEN
v_dur:=v_dur + rec_c22_2.duration;
v_temp_rec_c22:=rec_c22_2;
END IF;
END LOOP;
END IF;
EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);
/* exiting the loop if we have at least one middle part.
(I couldn't find if there is a way to write this more clean, like exit when (the above if is true) */
END LOOP;
FOR rec_c23 IN cur_c23 LOOP
IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration
/* we should always have one last part, so we need this check.
If we don't have the "v_dur != rec_c21.duration" part it will execute the code inside only if we don't have middle parts
(yes we can have these situations in calls longer than 30 and less than 60 minutes). */
THEN
v_dur:=v_dur + rec_c23.duration;
rec_c21.duration := v_dur; -- updating the duration
rec_c21.segmentID := 'C1';
INSERT INTO table2 VALUES rec_c21; -- inserting the whole call in table2
END IF;
EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;
-- exit the loop once the last part has been found.
END LOOP;
END LOOP;
END;
I'm using Oracle 11g and version 1.5.5 of SQL Developer.
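For reference, the stitching logic in the loops above (take a C21 first part, add any C22 middles that follow at exact 30-minute offsets, then fold in the C23 last part) can be sketched outside the database in Python. The records and field names here are hypothetical stand-ins for the cursor columns:

```python
from datetime import datetime, timedelta

HALF_HOUR = timedelta(minutes=30)  # same offset as (1/48) of a day in the PL/SQL

def stitch_calls(parts):
    """Merge partial call records (C21 first, C22 middles, C23 last) into whole C1 calls.

    Each part is a dict with keys: segment, start, duration, calling, called.
    Parts of one call share the calling/called numbers and start exactly
    30 minutes apart, mirroring the cursor comparisons above.
    """
    # middles and lasts sorted by start date, like the ordered cursors
    middles_and_lasts = sorted(
        (p for p in parts if p["segment"] in ("C22", "C23")),
        key=lambda p: p["start"],
    )
    merged = []
    for first in (p for p in parts if p["segment"] == "C21"):
        total = first["duration"]
        expected = first["start"] + HALF_HOUR  # where the next part must begin
        for p in middles_and_lasts:
            if (p["calling"] == first["calling"]
                    and p["called"] == first["called"]
                    and p["start"] == expected):
                total += p["duration"]
                expected += HALF_HOUR  # look for the next consecutive part
        merged.append({**first, "segment": "C1", "duration": total})
    return merged

# Hypothetical three-part call: first part at 12:13, middle at 12:43, last at 13:13.
calls = [
    {"segment": "C21", "start": datetime(2012, 5, 11, 12, 13, 10),
     "duration": 1800, "calling": "1982032041", "called": "0631432831624"},
    {"segment": "C22", "start": datetime(2012, 5, 11, 12, 43, 10),
     "duration": 1800, "calling": "1982032041", "called": "0631432831624"},
    {"segment": "C23", "start": datetime(2012, 5, 11, 13, 13, 10),
     "duration": 950, "calling": "1982032041", "called": "0631432831624"},
]
merged = stitch_calls(calls)
```

This is only a sketch of the algorithm, not a replacement for the PL/SQL; the actual solution still has to run inside the database.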
It's my first post here so hope this is the right sub-forum.
I tried to explain everything in as much depth as possible (sorry if it's too long), and I think the code may have become hard to read with all these comments. If you want, I can remove them.
I know I'm still missing a lot of knowledge so every help is really appreciated.
Thank you very much in advance!

Atiel wrote:
Thanks for the suggestion, but the thing is that segmentID must stay the same for all. The data in this field just tells us whether this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be C1 for all.

Well, that's not a problem. You just hard-code 'C1' instead of applying the row number as I was doing:
SQL> ed
Wrote file afiedt.buf
1 select 'C1' as segmentid
2 ,start_date_of_call, duration, callingnumber, callednumber
3 from (
4 select distinct
5 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
6 ,sum(duration) over (partition by callingnumber, callednumber) as duration
7 ,callingnumber
8 ,callednumber
9 from table1
10* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349
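To make the partition-and-aggregate idea in the query above concrete, here is a rough Python rendering of the same logic, i.e. the earliest start and total duration per (callingnumber, callednumber) pair. The sample rows are hypothetical; in the database the single SQL statement above is all you need:

```python
from collections import defaultdict

# Hypothetical rows mirroring table1: (segment, start, duration, calling, called).
rows = [
    ("C21", "2012-05-11 12:13:10", 100, "1982032041", "0631432831624"),
    ("C22", "2012-05-11 12:43:10", 200, "1982032041", "0631432831624"),
    ("C23", "2012-05-11 13:13:10", 50,  "1982032041", "0631432831624"),
    ("C1",  "2012-07-31 23:20:23", 75,  "4799842978", "0813391427349"),
]

# Per (calling, called) pair: earliest start and summed duration — the Python
# analogue of min(...) over (partition by ...) and sum(...) over (partition by ...).
groups = defaultdict(lambda: {"start": None, "duration": 0})
for _segment, start, duration, calling, called in rows:
    g = groups[(calling, called)]
    g["start"] = start if g["start"] is None else min(g["start"], start)
    g["duration"] += duration

result = [
    {"segmentid": "C1", "start_date_of_call": g["start"], "duration": g["duration"],
     "callingnumber": calling, "callednumber": called}
    for (calling, called), g in groups.items()
]
```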
Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?

If that's what you need, then yes, you would have to list them. You only get data if you tell it you want it. ;)
Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
SQL> select * from table1;
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
C21 15-MAR-2012 09:07:26 134676480 5581790386 0113496771567 219 100 10.16
C23 11-MAY-2012 09:37:26 134676480 5581790386 0113496771567 321 73 2.71
C21 11-MAY-2012 12:13:10 3892379648 1982032041 0631432831624 959 80 2.87
C22 11-MAY-2012 12:43:10 3892379648 1982032041 0631432831624 375 57 8.91
C22 11-MAY-2012 13:13:10 117899264 1982032041 0631432831624 778 27 1.42
C23 11-MAY-2012 13:43:10 117899264 1982032041 0631432831624 308 97 3.26
7 rows selected.
SQL> ed
Wrote file afiedt.buf
1 with t2 as (
2 select 'C1' as segmentid
3 ,start_date_of_call, duration, callingnumber, callednumber
4 from (
5 select distinct
6 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
7 ,sum(duration) over (partition by callingnumber, callednumber) as duration
8 ,callingnumber
9 ,callednumber
10 from table1
11 )
12 )
13 --
14 select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
15 ,t1.col1, t1.col2, t1.col3
16 from t2
17 join table1 t1 on ( t1.start_date_of_call = t2.start_date_of_call
18 and t1.callingnumber = t2.callingnumber
19 and t1.callednumber = t2.callednumber
20* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624 959 80 2.87
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567 219 100 10.16
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
SQL>
Of course, this is pulling back the additional columns from the record that matches the start_date_of_call for that calling/called number pair. So if the values differ from row to row within the pair, you may need to aggregate them (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group, then you can just pick them up from the join to the original table, as coded in the example above (though in my sample data the values differed across rows). -
How to optimize the performance of crystal report?
Hi,
-I have to design a Crystal report with the best possible optimization. Optimization is the main concern, since the report will run against a data set of 1-2 million records. Though I am using parameters to fetch only the required data, the required data can still reach 1 million records.
-Based on the input passed by the user, I have to group the data in the report, and for each selected parameter the detail section I print is different. For example, if the user selects Store then the detail section is one layout, and if the user selects Host then the detail section is another.
-The report can be grouped by a Time field as well. To fulfill this requirement I would have to create a subreport, since the other parameters are of string type and can be handled in one formula to get parameter-based grouping in the report. However, if I try to return the Time field from the same formula I get the error "Return type should be of String type". This forces me to create a subreport for Time-based grouping. If the user selects the Time field to be grouped on, all the information in the main report gets suppressed and only the subreport gets printed.
If the user selects Store, Host and User as the parameters to group on, the subreport gets suppressed.
Now, with the above-mentioned points in mind, I tried to optimize the report in the following way.
-Printing 1 million records in the report does not make sense; hence we wanted to show a summary of all the records in the chart section but print just 5000 records in the detail section. Suppressing the detail section after 5000 records does not help much, since suppression only saves printing time and does not limit the number of records fetched from the DB. I also have a subreport, so the data is fetched from the DB twice, which makes report performance worse.
To solve this problem I used a Command Object, putting the charts in the subreport and the detail in the main report.
In the main report's Command Object I limited the number of records fetched from the DB to 5000 using rownum<5000, but in the subreport's Command Object I did not set any limit in the query; instead I do all my aggregation in SQL, which means the summary operation runs in the DB and only summarized data is fetched.
-To solve the section problem I am using the Template Object (a new feature added in CR 2008), in which I return the field based on the "Group By" parameter selected by the user.
-For the Time field I have created two subreports, one for the chart and one for the details, in the same way described in the first point (printing 1 million records...).
After implementing these points my Crystal Reports performance improved drastically. The report that was taking 24 minutes to come back now takes only 2 minutes.
However, I want my report to come back within one minute. It does if I remove the subreports for Time-based grouping, but I cannot do that.
My questions are:
-Can I stop a subreport from fetching data from the DB if it's suppressed?
-I believe using a conditional Template Object is a better option than having multiple detail sections to print the data for a selected group. However, any suggestion here to improve performance would be appreciated.
-Since Crystal Reports does not provide any option to limit the number of records fetched from the DB, I am forced to use a Command Object with rownum in the WHERE condition. Please let me know about other option(s) to get this done, if there are any.
I am using Crystal Reports 2008, and we have developed our application to use the JRC to export the Crystal report to PDF.
Regards,
Amrita
Edited by: Amrita Singh on May 12, 2009 11:36 AM
Edited by: Amrita Singh on May 12, 2009 12:26 PM -
My computer says it can't find the file 'iTunes64.msi' when I try to update or re-download iTunes. How do I fix this? I've tried just about everything I can think of. I am using a Dell laptop running Windows 7, and I have tried changing the file location to run the update in all of my iTunes files, as well as reinstalling it.
For general advice see Troubleshooting issues with iTunes for Windows updates.
The steps in the second box are a guide to removing everything related to iTunes and then rebuilding it which is often a good starting point unless the symptoms indicate a more specific approach. Review the other boxes and the list of support documents further down the page in case one of them applies.
The further information area has direct links to the current and recent builds in case you have problems downloading, need to revert to an older version or want to try the iTunes for Windows (64-bit - for older video cards) release as a workaround for installation or performance issues, or compatibility with QuickTime or third party software.
Your library should be unaffected by these steps but there are also links to backup and recovery advice should it be needed.
tt2 -
My MacBook Pro is running very slow is there anything I can do to improve the performance?
Is there anything I can do to improve the speed of my MacBook Pro?
Kappy's Personal Suggestions About OS X Maintenance
For disk repairs use Disk Utility. For situations DU cannot handle, the best third-party utility is Disk Warrior; DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption. Drive Genius provides additional tools not found in Disk Warrior for defragmentation of older drives, disk repair, disk scans, formatting, partitioning, disk copy, and benchmarking.
Four outstanding sources of information on Mac maintenance are:
1. OS X Maintenance - MacAttorney.
2. Mac maintenance Quick Assist
3. Maintaining Mac OS X
4. Mac Maintenance Guide
Periodic Maintenance
OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) See Mac OS X- About background maintenance tasks. If you are running Leopard or later these tasks are run automatically, so there is no need to use any third-party software to force running these tasks.
If you are using a pre-Leopard version of OS X, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep. Dependence upon third-party utilities to run the periodic maintenance scripts was significantly reduced after Tiger. (These utilities have limited or no functionality with Snow Leopard, Lion, or Mountain Lion and should not be installed.)
Defragmentation
OS X automatically defragments files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive except when trying to install Boot Camp on a fragmented drive. But you don't need to buy third-party software. All you need is a spare external hard drive and Carbon Copy Cloner.
Cheap and Easy Defragmentation
You will have to backup your OS X partition to an external drive, boot from the external drive, use Disk Utility to repartition and reformat your hard drive back to a single volume, then restore your backup to the internal hard drive. You will use Carbon Copy Cloner to create the backup and to restore it.
1. Get an empty external hard drive and clone your internal drive to the
external one.
2. Boot from the external hard drive.
3. Erase the internal hard drive.
4. Restore the external clone to the internal hard drive.
Clone the internal drive to the external drive
1. Open Carbon Copy Cloner.
2. Select the Source volume from the left side dropdown menu.
3. Select the Destination volume from the left side dropdown menu.
4. Be sure the Block Copy button is not depressed or is ghosted.
5. Click on the Clone button.
Destination means the external backup drive. Source means the internal startup drive.
Restart the computer and after the chime press and hold down the OPTION key until the boot manager appears. Select the icon for the external drive and click on the upward pointing arrow button.
After startup do the following:
Erase internal hard drive
1. Open Disk Utility in your Utilities folder.
2. After DU loads select your internal hard drive (this is the entry with the
mfgr.'s ID and size) from the left side list. Note the SMART status of the
drive in DU's status area. If it does not say "Verified" then the drive is
failing or has failed and will need replacing. SMART info will not be
reported on external drives. Otherwise, click on the Partition tab in the
DU main window.
3. Under the Volume Scheme heading set the number of partitions from the
drop down menu to one. Set the format type to Mac OS Extended
(Journaled.) Click on the Options button, set the partition scheme to
GUID then click on the OK button. Click on the Partition button and wait
until the process has completed.
Restore the clone to the internal hard drive
1. Open Carbon Copy Cloner.
2. Select the Source volume from the left side dropdown menu.
3. Select the Destination volume from the left side dropdown menu.
4. Be sure the Block Copy button is not selected or is ghosted.
5. Click on the Clone button.
Destination means the internal hard drive. Source means the external startup drive.
Note that the Source and Destination drives are swapped for this last procedure.
Malware Protection
As for malware protection there are few if any such animals affecting OS X. Starting with Lion Apple has included built-in malware protection that is automatically updated as necessary.
Helpful Links Regarding Malware Protection:
1. Mac Malware Guide.
2. Detecting and avoiding malware and spyware
3. Macintosh Virus Guide
For general anti-virus protection I recommend only using ClamXav, but it is not necessary if you are keeping your computer's operating system software up to date. You should avoid any other third-party software advertised as providing anti-malware/virus protection. They are not required and could cause the performance of your computer to drop.
Cache Clearing
I recommend downloading a utility such as TinkerTool System, OnyX 2.4.3, Mountain Lion Cache Cleaner 7.0.9, Maintenance 1.6.8, or Cocktail 5.1.1 that you can use for periodic maintenance such as removing old log files and archives, clearing caches, etc. Corrupted cache files can cause slowness, kernel panics, and other issues. Although this is not a frequent nor a recurring problem, when it does happen there are tools such as those above to fix the problem.
If you are using Snow Leopard or earlier, then for emergency cleaning install the freeware utility Applejack. If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the command line. Note that AppleJack 1.5 is required for Leopard. AppleJack 1.6 is compatible with Snow Leopard. (AppleJack works with Snow Leopard or earlier.)
Installing System Updates or Upgrades
When you install any new system software or updates be sure to repair the hard drive and permissions beforehand.
Backup and Restore
Having a backup and restore strategy is one of the most important things you can do to maintain your computer. Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
1. Carbon Copy Cloner.
2. Deja Vu
3. SuperDuper!
4. Synk Pro
5. Tri-Backup
Visit The XLab FAQs and read the FAQs on maintenance and backup and restore.
Always have a current backup before performing any system updates or upgrades.
Be sure you have an adequate amount of RAM installed for the number of applications you run concurrently. Be sure you leave a minimum of 10% of the hard drive's capacity or 20 GBs, whichever is greater, as free space. Avoid installing utilities that rely on Haxies, SIMBL, or that alter the OS appearance, add features you will rarely if ever need, etc. The more extras you install the greater the probability of having problems. If you install software be sure you know how to uninstall it. Avoid installing multiple new software at the same time. Install one at a time and use it for a while to be sure it's compatible.
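The free-space guideline above (leave at least 10% of the drive's capacity or 20 GB, whichever is greater) is simple arithmetic; a quick sketch of the rule of thumb (the function name is my own):

```python
def min_free_space_gb(capacity_gb):
    """Free space to keep, per the rule of thumb: 10% of capacity
    or 20 GB, whichever is greater."""
    return max(capacity_gb * 0.10, 20.0)

# A 120 GB drive still needs the 20 GB floor; a 500 GB drive needs 50 GB.
small = min_free_space_gb(120)
large = min_free_space_gb(500)
```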
Additional suggestions will be found in:
1. Mac OS X speed FAQ
2. Speeding up Macs
3. Macintosh OS X Routine Maintenance
4. Essential Mac Maintenance: Get set up
5. Essential Mac Maintenance: Rev up your routines
6. Five Mac maintenance myths
7. How to Speed up Macs
8. Myths of required versus not required maintenance for Mac OS X
Referenced software can be found at CNet Downloads or MacUpdate.
Add more RAM or run fewer applications concurrently.