Total after every 25 records
Dear Friends,
I would like to write a query that also returns the total of some columns after every 25 records,
like this:
ccno   salary
1      5000
2      10000
25     80000
total  <total of above 25>
26     25000
27     10000
50     13000
total  <total of above 50>
Can we achieve this?
Waiting for a reply.
with tab as (
select 1 ccno,100 salary from dual union all
select 2 ccno,200 salary from dual union all
select 3 ccno,300 salary from dual union all
select 4 ccno,400 salary from dual union all
select 5 ccno,500 salary from dual union all
select 6 ccno,600 salary from dual union all
select 7 ccno,700 salary from dual
)--end of test data
select ccno,
salary,
case when mod(row_number() over (order by ccno), 3) = 0 then sum(salary) over (order by ccno) else null end as sumsal
from tab
CCNO  SALARY  SUMSAL
   1     100
   2     200
   3     300     600
   4     400
   5     500
   6     600    2100
   7     700
7 rows selected.
Change the 3 in the MOD to 25 for your data.
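The same technique can be tried outside Oracle as well; here is a quick sketch of it using SQLite window functions via Python (assumes SQLite 3.25+; MOD becomes the % operator and DUAL is not needed):

```python
import sqlite3

# Running total shown on every 3rd row, mirroring the MOD(ROW_NUMBER(), 3) trick.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab (ccno INTEGER, salary INTEGER);
    INSERT INTO tab VALUES (1,100),(2,200),(3,300),(4,400),(5,500),(6,600),(7,700);
""")
rows = conn.execute("""
    SELECT ccno,
           salary,
           CASE WHEN ROW_NUMBER() OVER (ORDER BY ccno) % 3 = 0
                THEN SUM(salary) OVER (ORDER BY ccno)
           END AS sumsal
    FROM tab
""").fetchall()
for ccno, salary, sumsal in rows:
    print(ccno, salary, sumsal if sumsal is not None else "")
```

As in the Oracle version, change the 3 to 25 for the original requirement.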
Similar Messages
-
Totals after every group of rows - formating help
Hi,
What I need is to make the cells darker where the totals are after every group of rows. How do I do that?
Also, I'm trying to remove the fact that they're merged cells for downloading to Excel (on another report for further analytics).
Ex: a group of rows shows multiple opportunities for an account. The Total NSR (Revenue) value for all the rows for that one account is displayed after the rows, and I don't want this on one of our reports.
Thanks,
Anita
Are you talking about some pre-built report or a custom report you have created?
-
Getting extra line after each record when opening a .txt file in excel
Hi All,
I have developed a program which downloads a file at application server.
Each record of the file is 500 characters long and has CRLF at the end.
The file looks fine when opened in .txt format.
However, when I download it from the application server to the presentation server (using the "download to my computer" function) and then open it in Excel, a blank line appears after every record.
I don't want this blank line to appear when I download the file and open it in Excel.
The file record is declared as type char500.
Please suggest how to deal with this.
Thanks in advance.
Regards,
Puja.
Hi Puja,
Check the file on the application server to see whether it has any gaps between the lines.
Otherwise, since the file looks fine in .txt format, download the file as .txt and open that same file in Excel (i.e. open it with Excel).
Hope this solves your problem.
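If the blank rows come from doubled line endings (CR LF plus a stray CR or LF picked up during the download), normalizing them before opening the file in Excel usually helps. A small sketch; the filenames and sample content here are hypothetical:

```python
# Demo input with doubled line endings, as the downloaded file might contain.
with open("records_raw.txt", "wb") as f:
    f.write(b"record one\r\n\r\nrecord two\r\n\r\n")

with open("records_raw.txt", "rb") as f:
    data = f.read()

# Collapse CR/LF variants to a single LF and drop the resulting empty lines.
lines = [ln for ln in data.replace(b"\r\n", b"\n").replace(b"\r", b"\n").split(b"\n")
         if ln.strip()]

with open("records_clean.txt", "wb") as f:
    f.write(b"\n".join(lines) + b"\n")
```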
Regards,
SB. -
Commit in procedures after every 100000 records possible?
Hi All,
I am using an ODI procedure to insert data into a table.
I checked that in the ODI procedure there is an option to select a transaction and set the commit option to 'Commit after every 1000 records'.
Since the record count to be inserted is 38,489,152, I would like to know whether this option is configurable.
Can I ensure that commits are made at a logical step of 100,000 instead of 1,000 records?
Thank You.
Prerna
Recently added on this:
http://dwteam.in/commit-interval-in-odi/
Thanks
Bhabani
http://dwteam.in -
Commit after every 1000 records
Hi dears ,
I have to update or insert around 1 lakh (100,000) records every day on an incremental basis.
While doing it, the commit happens only after completing all the records; in case of a problem in between, all my processed records get rolled back.
I need to commit after every batch of, say, 1000 records.
Does anyone know how to do it?
Thanks in advance
Regards
Raja
Raja,
There is an option in the configuration of a mapping in which you can set the Commit Frequency. The Commit Frequency only applies to non-bulk mode mappings. Bulk mode mappings commit according to the bulk size (which is also a configuration setting of the mapping).
When you set the Default Operating Mode to row based and Bulk Processing Code to false, Warehouse Builder uses the Commit Frequency parameter when executing the package. Warehouse Builder commits data to the database after processing the number of rows specified in this parameter.
If you set Bulk Processing Code to true, set the Commit Frequency equal to the Bulk Size. If the two values are different, Bulk Size overrides the commit frequency and Warehouse Builder implicitly performs a commit for every bulk size.
Regards,
Ilona -
ResultSet "hangs" after every 10 records
Hi
Please could somebody help me.
I have extracted a ResultSet from a database which contains between 100 and 200 records (5 fields each).
If I call rset.next(), printing a count after each call, my program hangs for about 2 minutes after every 10 records.
For example:
int count = 0;
while (rset.next()) {
    System.out.println("" + ++count);
}
Prints:
1
2
3
4
5
6
7
8
9
10
Waits here for two minutes and then carries on
11
20
Waits again for 2 minutes etc.
Has anyone had this problem or does anyone know how to fix it?
FYI: prstat reports tiny CPU and memory usage so the hardware is not responsible.
Thanks a lot in advance
Hi All
It must be the network - setFetchSize is unsupported in both Statement and ResultSet in the driver set I am using.
It is running through a 10baseT switch at the moment, which may be the problem, so I will stick it on the backbone and try again.
Thanks again for your help. -
Need to commit after every 10 000 records inserted ?
What would be the best way to commit after every 10,000 records inserted from one table to the other using the following script:
DECLARE
l_max_repa_id x_received_p.repa_id%TYPE;
l_max_rept_id x_received_p_trans.rept_id%TYPE;
BEGIN
SELECT MAX (repa_id)
INTO l_max_repa_id
FROM x_received_p
WHERE repa_modifieddate <= ADD_MONTHS (SYSDATE, -6);
SELECT MAX (rept_id)
INTO l_max_rept_id
FROM x_received_p_trans
WHERE rept_repa_id = l_max_repa_id;
INSERT INTO x_p_requests_arch
SELECT *
FROM x_p_requests
WHERE pare_repa_id <= l_max_rept_id;
DELETE FROM x_p_requests
WHERE pare_repa_id <= l_max_rept_id;
END;
1006377 wrote:
We are moving between 5 and 10 million records from one table to the other and it takes forever.
Please could you provide me with a script just to commit after every x amount of records? :)
I concur with the other responses.
Committing every N records will slow down the process, not speed it up.
The fastest way to move your data (and 10 million rows is nothing, we do those sorts of volumes frequently ourselves) is to use a single SQL statement to do an INSERT ... SELECT ... statement (or a CREATE TABLE ... AS SELECT ... statement as appropriate).
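For illustration, here is what that set-based archive-and-delete looks like as a single transaction, sketched with SQLite in Python (table names are borrowed from the thread; the row count and cutoff value are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x_p_requests      (pare_repa_id INTEGER, payload TEXT);
    CREATE TABLE x_p_requests_arch (pare_repa_id INTEGER, payload TEXT);
""")
conn.executemany("INSERT INTO x_p_requests VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1, 1001)])
conn.commit()

cutoff = 700  # stands in for l_max_rept_id
with conn:  # one transaction: both statements commit together, exactly once
    conn.execute("INSERT INTO x_p_requests_arch "
                 "SELECT * FROM x_p_requests WHERE pare_repa_id <= ?", (cutoff,))
    conn.execute("DELETE FROM x_p_requests WHERE pare_repa_id <= ?", (cutoff,))

archived = conn.execute("SELECT COUNT(*) FROM x_p_requests_arch").fetchone()[0]
remaining = conn.execute("SELECT COUNT(*) FROM x_p_requests").fetchone()[0]
print(archived, remaining)  # 700 300
```

The point is the shape, not the engine: two set-based statements, one commit, no per-row loop.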
If those SQL statements are running slowly then you need to look at what's causing the performance issue of the SELECT statement and tackle that issue, which may be a case of simply getting the database statistics up to date, applying a new index to a table, or re-writing the select statement to tackle the query in a different way.
So, deal with the cause of the performance issue; don't try to fudge your way around it, which will only create further problems. -
I run with the Nano and shoe sensor. After I covered 250 miles I got the automated congratulations, but now after every run I get the recorded message again saying I just covered another 250 miles. Can I reset or stop it without completely resetting everything?
Same issue here. This is really disappointing. I used to look forward to the milestone messages after each run, especially when I was surprised by a celebrity voice. Now, it's the same thing every single time, "Congratulations on another 250 miles. Way to go!" or something along those lines. I was proud of the 250 mile mark, but please...I don't want to hear it every time!
I hope there's some movement on this issue.
~ Heather -
Add a row after every n records
Hi
I have a query that returns only one column
Column1
a
b
c
d
g
e
f
g
h
I want to add 01 as the first marker row, and then after 5 records I want to add 02, then 03 after another 5 records, and so on, i.e.
Column1
01
a
b
c
d
e
02
f
g
h
How can this be done?
Hi,
Nice post.
Regards salim.
other solution.
SELECT res
FROM t
model
dimension by( row_number()over(partition by 1 order by rownum) rn)
measures(col1,cast ( col1 as varchar2(20)) as res, count(1)over(partition by 1) cpt,trunc(rownum/5) diff)ignore nav
(diff[for rn from 1 to cpt[1]+trunc((cpt[1]+cpt[1]/5)/5) increment 1]=
case when diff[cv(rn)] is present then diff[cv(rn)]
else case when mod(cv(rn),5)=0 then
diff[cv(rn)-1]+1
else diff[cv(rn)-1]end
end,
res[for rn from 1 to cpt[1]+trunc((cpt[1]+cpt[1]/5)/5) increment 1]=
case when mod(cv(rn),5)=0 then
to_char((cv(rn)/5),'fm00')
else col1[cv(rn)-diff[cv(rn)]]end )
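Before the SQL*Plus run below, the marker-interleaving idea can also be sketched in plain Python, as a point of comparison. This is a simplification of the asker's request (one marker after every n data rows); note that the MODEL query instead makes every fifth output line a marker:

```python
def with_markers(rows, n=5):
    """Append a zero-padded marker row after every n data rows."""
    out, marker = [], 0
    for i, row in enumerate(rows, 1):
        out.append(row)
        if i % n == 0:
            marker += 1
            out.append("%02d" % marker)
    return out

print(with_markers(list("abcdefgh")))  # ['a', 'b', 'c', 'd', 'e', '01', 'f', 'g', 'h']
```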
SQL> WITH t AS
2 (SELECT 'a' col1
3 FROM DUAL
4 UNION ALL
5 SELECT 'b'
6 FROM DUAL
7 UNION ALL
8 SELECT 'c'
9 FROM DUAL
10 UNION ALL
11 SELECT 'd'
12 FROM DUAL
13 UNION ALL
14 SELECT 'g'
15 FROM DUAL
16 UNION ALL
17 SELECT 'e'
18 FROM DUAL
19 UNION ALL
20 SELECT 'f'
21 FROM DUAL
22 UNION ALL
23 SELECT 'g'
24 FROM DUAL
25 UNION ALL
26 SELECT 'h'
27 FROM DUAL
28 UNION ALL
29 SELECT 'i'
30 FROM DUAL
31 UNION ALL
32 SELECT 'j'
33 FROM DUAL
34 UNION ALL
35 SELECT 'k'
36 FROM DUAL
37 UNION ALL
38 SELECT 'l'
39 FROM DUAL
40 UNION ALL
41 SELECT 'm'
42 FROM DUAL
43 UNION ALL
44 SELECT 'o'
45 FROM DUAL
46 UNION ALL
47 SELECT 'p'
48 FROM DUAL
49 UNION ALL
50 SELECT 'q'
51 FROM DUAL
52 UNION ALL
53 SELECT 'z'
54 FROM DUAL
55 UNION ALL
56 SELECT 'z'
57 FROM DUAL
58 UNION ALL
59 SELECT 'z'
60 FROM DUAL
61 UNION ALL
62 SELECT 'y'
63 FROM DUAL)
64 SELECT res
65 FROM t
66 model
67 dimension by( row_number()over(partition by 1 order by rownum) rn)
68 measures(col1,cast ( col1 as varchar2(20)) as res, count(1)over(partition by 1) cpt,trunc(rownum/5) diff)ignore nav
69 (diff[for rn from 1 to cpt[1]+trunc((cpt[1]+cpt[1]/5)/5) increment 1]=
70 case when diff[cv(rn)] is present then diff[cv(rn)]
71 else case when mod(cv(rn),5)=0 then
72 diff[cv(rn)-1]+1
73 else diff[cv(rn)-1]end
74 end,
75 res[for rn from 1 to cpt[1]+trunc((cpt[1]+cpt[1]/5)/5) increment 1]=
76 case when mod(cv(rn),5)=0 then
77 to_char((cv(rn)/5),'fm00')
78 else col1[cv(rn)-diff[cv(rn)]]end )
79
SQL> /
RES
a
b
c
d
01
g
e
f
g
02
h
i
j
k
03
l
m
o
p
04
q
z
z
z
05
y
26 rows selected.
SQL> Edited by: Salim Chelabi on 2009-04-15 13:35 -
Need a pipe delimiter after every field in the file on application server
Hi,
I have to transfer data from an internal table to a file on the application server.
I have done this successfully. But the problem is that I have to put a pipe delimiter after every field in the file on the application server.
Could you please help with this issue?
Thanks & Regards
Suresh kumar D
Hi Suresh,
I think the below code should solve your problem, as I also had a similar requirement and this is the code I used in my program.
FIELD-SYMBOLS: <FS> TYPE ANY.
DATA: L_FLINE TYPE STRING.
* Open file for output
M_CHECK_SELSCR_FNMS O1 O.
LOOP AT I_TARGET.
* Write records to file
DO.
ASSIGN COMPONENT SY-INDEX OF STRUCTURE I_TARGET TO <FS>.
IF SY-SUBRC EQ 0.
IF SY-INDEX EQ 1.
MOVE <FS> TO L_FLINE.
ELSEIF <FS> NE C_PIPE.
CONCATENATE L_FLINE C_PIPE <FS> INTO L_FLINE.
ELSE.
CONCATENATE L_FLINE <FS> INTO L_FLINE.
ENDIF.
ELSE.
TRANSFER L_FLINE TO W_SRVR_NM_O_O1.
EXIT.
ENDIF.
ENDDO.
ENDLOOP.
* Close file
CLOSE DATASET W_SRVR_NM_O_O1.
IF SY-SUBRC EQ 0.
MESSAGE S208(00) WITH TEXT-M02.
ENDIF.
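The ABAP loop above builds each output line by concatenating the record's fields with a pipe between them; in Python the equivalent per-record step is a simple join (the field values below are made up for illustration):

```python
# Hypothetical records; each becomes one pipe-delimited line, like the
# CONCATENATE ... C_PIPE ... loop does for each component of the structure.
records = [("0001", "Widget", "9.99"), ("0002", "Gadget", "4.50")]
lines = ["|".join(fields) for fields in records]
print("\n".join(lines))
```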
Regards
Sikha -
Firefox 4 becomes unresponsive every few minutes (mainly every 10 minutes), and it comes back to its normal state within 10-15 seconds. I reinstalled Windows and Firefox, but the same problem occurs. Using Chrome 11 and IE 9 I have no problem. I also checked that there is no problem using Firefox 3.
I mainly use Firefox for my web browsing, but Firefox 4 has totally disappointed me. What should I do now?
You have an add-on installed called Internet Download Manager. It's on Mozilla's Add-on Blocklist @ https://www.mozilla.com/en-US/blocklist/ at the foot of the page.
Uninstall it by clicking the orange Firefox button, go to Add-ons and then remove it in the Extensions menu.
If the problem persists, try running Firefox in [[Safe Mode]]. If it functions properly in that configuration, then one of your other add-ons is the culprit. -
Time stamp and total no of records in ALV
Hi everybody,
I need to display
Time:
Total No of Records:
in my Excel report downloaded from ALV.
Can anybody help me out with how to place the above in my ALV so that it appears in the Excel output?
Hi,
For the header, please use the TOP-OF-PAGE event.
For the number of records you can use: DESCRIBE TABLE itab LINES lv_lines (declare lv_lines as type i). After this statement, lv_lines will contain the number of records in the final internal table you are using to display the output.
For the time stamp, just pass sy-uzeit.
Thanks,
Guru -
Commit after 2000 records in update statement but am not using loop
Hi
My oracle version is oracle 9i
I need to commit after every 2000 records. Currently I am using the below statement without a loop. How do I do this?
do i need to use rownum?
BEGIN
UPDATE
(SELECT A.SKU,M.TO_SKU,A.TO_STORE FROM
RT_TEMP_IN_CARTON A,
CD_SKU_CONV M
WHERE
A.SKU=M.FROM_SKU AND
A.SKU<>M.TO_SKU AND
M.APPROVED_FLAG='Y')
SET SKU = TO_SKU,
TO_STORE=(SELECT(
DECODE(TO_STORE,
5931,'931',
5935,'935',
5928,'928',
5936,'936'))
FROM
RT_TEMP_IN_CARTON WHERE TO_STORE IN ('5931','5935','5928','5936'));
COMMIT;
end;
Thanks for your help
I need to commit after every 2000 records
Why?
Committing every n rows is not recommended....
Currently am using the below statement without using the loop. How to do this?
Use a loop? (not recommended) -
Avoid Commit after every Insert that requires a SELECT
Hi everybody,
Here is the problem:
I have a table of generator alarms which is populated daily; approximately 50,000 rows are inserted each day.
Currently I have one month's data in it - approximately 900,000 rows.
Here is the main problem:
Before each INSERT, the whole table is checked to see whether the record already exists. Two columns, "SiteName" and "OccuranceDate", are checked together with an AND in the WHERE clause - together these two columns make a record unique.
we have also implemented partition on this table. and it is basically partitioned on the basis of OccuranceDate and each partition has 5 days' data.
say
01-Jun to 06 Jun
07-Jun to 11 Jun
12-Jun to 16 Jun
and so on
26-Jun to 30 Jun
NOW:
We have a commit command within the insertion loop, and each row is committed once inserted, making approximately 50,000 commits daily.
Question:
Can we commit data after, say, each 500 inserted rows? But my real question is: can we query records using SELECT which are just inserted but not yet committed?
A friend told me that you can query records which were inserted in the same connection session but not yet committed.
Can any one help ?
Sorry for the long question, but it was to make you understand the real issue. :(
Khalid Mehmood Awan
khalidmehmoodawan @ gmail.com
Edited by: user5394434 on Jun 30, 2009 11:28 PM
Don't worry about it - I just said that because the experts over there will help you much better. If you post your code details there, they will give suggestions on optimizing it.
Doing a SELECT between every INSERT doesn't seem very natural to me, but it all depends on the details of your code.
Also, not committing on time may cause loss of the uncommitted changes. Depending on how critical the data is and the dependency of the changes, you have to commit after every INSERT, in between, or at the end.
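On the "query rows inserted but not yet committed" question: within the same session/connection, yes, your own SELECTs see your uncommitted changes. A sketch in Python/SQLite follows; the table layout is hypothetical, mirroring the SiteName/OccuranceDate uniqueness, and in Oracle the duplicate check would be a unique constraint with exception handling or a MERGE rather than SQLite's INSERT OR IGNORE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage the transaction explicitly
conn.execute("CREATE TABLE alarms (site TEXT, occurred TEXT, UNIQUE (site, occurred))")

conn.execute("BEGIN")
conn.execute("INSERT OR IGNORE INTO alarms VALUES ('SITE1', '2009-06-30')")
conn.execute("INSERT OR IGNORE INTO alarms VALUES ('SITE1', '2009-06-30')")  # duplicate: skipped

# Same connection, still inside the open transaction: the new row is visible.
count = conn.execute("SELECT COUNT(*) FROM alarms").fetchone()[0]
conn.execute("COMMIT")
print(count)  # 1
```

Letting the constraint reject duplicates also removes the need for a SELECT before every INSERT.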
Regards,
K. -
Calling Delta Merge in DS after every commit
Hi Folks,
I am using delta extraction logic in DS to extract a large table from ECC (50 million rows) to the HANA database. The commits in the DS job have been configured for every 10,000 records. Three questions:
1) Should I disable the delta merge in the HANA database for this target table prior to the initial load? Once the initial load is complete, is manually performing the delta merge in HANA the right approach? Or
2) Should I manually perform the delta merge in the DS job to make sure the table is merged after every commit? If yes, how do I call the delta merge command in DS jobs, and how can I do it per commit?
3) Can I invoke the delta merge in DS as part of the delta extraction logic after the initial load is completed in DS?
Any advice will definitely be appreciated.
Thanks,
-Hari
Hi Jim,
If your big table requires a merge, AUTOMERGE will pick it up. The mergedog process checks it every 60 seconds, so that should be all right for your requirement.
If the table doesn't need to be merged, it won't.
Manually handling the delta merge is a fine-tuning action that is most often not required or recommendable.
- Lars