Split SQR output into multiple files and rename each file
Greetings!
We ran into a scenario where we need to split an SQR output (PDF) into multiple files (one file per page). Moreover, each output file has to be named with the EMPLID.
This is for printing advices: we run DDP003 to print all of them, but our requirement is to have one PDF file per employee, placed on a shared drive or emailed.
I was thinking of running the same process in a loop individually, but this is resource intensive. Is there any way we can batch run this SQR process and split the output into one page per file (each paycheck comes to one page),
AND
I need them renamed by SQR as it creates them; each file would be named with the EMPLID.
This is all an alternative to running the process individually by employee via self service, so as not to overload the Process Scheduler server.
Can anyone help me with how we can get these things done in SQR?
Thanks in advance.
You can use the SQR command NEW-REPORT for each employee: it closes the current report output file and opens a new one with the file name you specify (e.g. the EMPLID).
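If post-processing the output outside SQR is also an option, the split-and-rename step itself can be sketched like this (illustrative Python; `split_spool` is a hypothetical helper that assumes a plain-text spool with form-feed page breaks and the EMPLID as the first token on each page; splitting a real PDF would need a PDF library such as pypdf):

```python
import os

def split_spool(spool_path, out_dir):
    """Split a form-feed-delimited spool into one file per page, naming
    each file after the first token on the page (assumed to be the EMPLID)."""
    with open(spool_path) as f:
        # drop empty trailing pages produced by a final form feed
        pages = [p for p in f.read().split("\f") if p.strip()]
    for page in pages:
        emplid = page.split()[0]  # assumption: EMPLID is the first token
        with open(os.path.join(out_dir, emplid + ".txt"), "w") as out:
            out.write(page)
    return len(pages)
```

Running one batch process and splitting afterwards keeps the Process Scheduler load to a single run.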
Similar Messages
-
Split XSLT Output into Multiple Files
I have an XML-to-File scenario working, but now I need to split my XSLT map output into multiple files based on the data. I have been reading the Jin Shin blog on message splitting, but am not sure it pertains to my situation.
The XML data, after mapping with the XSLT map, produces output formatted like this:
<?xml version="1.0" encoding="utf-8"?>
<ns1:ColdInvoiceData xmlns:ns1="http://graybar.com/cold/invoice">
<Header>
<RecordID>HDR</RecordID>
<InvoiceNumber>15</InvoiceNumber>
</Header>
<Details>
<RecordID>DTL</RecordID>
<LineItemNumber>001</LineItemNumber>
<UnitPrice>1.25</UnitPrice>
</Details>
<Details>
<RecordID>DTL</RecordID>
<LineItemNumber>002</LineItemNumber>
<UnitPrice>2.22</UnitPrice>
</Details>
<Header>
<RecordID>HDR</RecordID>
<InvoiceNumber>16</InvoiceNumber>
</Header>
<Details>
<RecordID>DTL</RecordID>
<LineItemNumber>001</LineItemNumber>
<UnitPrice>3.33</UnitPrice>
</Details>
</ns1:ColdInvoiceData>
I currently have this output writing to a file (FTP, File Conversion). A single file is no issue, but I need to send multiple files for every set of HDR/DTL(s). I also need to put the invoice number in the filename (which is working fine as a parameter in my single FTP File CC now).
Can I make this happen with message splitting and maybe a second map (GUI map)? Do I need to adjust the XSLT output XML format to have an invoice level? Is there a better way to go?
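As a cross-check on the grouping logic (not a PI solution), the per-invoice split can be sketched outside XI. The element names below come from the sample XML above; Python is used purely for illustration:

```python
import xml.etree.ElementTree as ET

NS = "http://graybar.com/cold/invoice"

def split_invoices(xml_text):
    """Group each <Header> with the <Details> that follow it and return
    one (invoice_number, xml_string) pair per invoice."""
    root = ET.fromstring(xml_text)
    invoices, current = [], None
    for child in root:
        if child.tag == "Header":
            # a Header starts a new per-invoice document
            current = ET.Element("{%s}ColdInvoiceData" % NS)
            current.append(child)
            invoices.append(current)
        elif current is not None:
            current.append(child)  # Details belong to the last Header seen
    return [(inv.find("Header/InvoiceNumber").text,
             ET.tostring(inv, encoding="unicode")) for inv in invoices]
```

Each returned pair could then be written to its own file, with the invoice number in the filename.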
Thanks!
I made a change to the namespace used on the Messages/Message1 nodes (since the system-assigned ns0 was already being used in my XML data) and now I am getting output.
The problem is that the output matches my input. As mentioned, I started with just a one-to-one mapping on every node and field.
When I change the mapping to try to force multiple ColdInvoiceData nodes, I get the following error (when my source has two Invoice nodes):
<Trace level="1" type="T">com.sap.aii.utilxi.misc.api.BaseRuntimeException: RuntimeException in Message-Mapping transformation: Cannot produce target element /ns0:Messages/ns0:Message1/ns1:ColdInvoiceData[2]/Invoice. Check xml instance is valid for source xsd and target-field mapping fulfills requirements of target xsd at
When my source has one invoice node, it works ok.
Here is a screenshot of my mapping structure.
http://webpages.charter.net/kpwendel2/ib.jpg -
How to search for multiple texts across documents and rename each file with the found text?
Hello:
I'm trying to run a batch search across multiple documents and rename (or save as) each file after the found word.
For example:
I have many unique texts and want to search for them across multiple documents.
If a document is found containing one of the unique texts, the document is renamed (or saved as) that unique text.
That way I know which unique text each file holds.
How do I do that?
Let me know.
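In outline, the search-and-rename could look like this (an illustrative Python sketch; it assumes plain-text files, and `rename_by_keyword` is a hypothetical helper; PDFs or Word documents would need a text-extraction step first):

```python
import os

def rename_by_keyword(folder, keywords):
    """Rename each file in `folder` after the first keyword found in its
    contents, keeping the original extension."""
    renamed = {}
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        text = open(path, errors="ignore").read()
        for kw in keywords:
            if kw in text:
                ext = os.path.splitext(name)[1]
                os.rename(path, os.path.join(folder, kw + ext))
                renamed[name] = kw + ext
                break  # first matching keyword wins
    return renamed
```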
Thanks
Welcome to the forum!
When you want to post a block of code, you can enclose it with the mark ups { code }
That is the key word code surrounded by curly brackets, but without the spaces
You seem to be running a very old (and unsupported) release of the database.
7.3 has not been a current release for about 10 years.
It's probably been that long since I've used this technique, but I think it should work.
You should consider welcoming your system to the 21st century by upgrading to a supported release ;-)
If you used split to chop up your export file, use cat or dd to reassemble it.
So, something like this:
mknod bk.dmp p
cat xaa xab xac xad xae xaf xag xah xai > bk.dmp &
imp SYSTEM/$PASSWD parfile=imp_bk.parfile
rm bk.dmp
$ cat imp_bk.parfile
file=bk.dmp
log=imp.log
full=y
buffer=1048576
ignore=y
commit=y
Let us know if you still have problems.
Good Luck! -
Split single row into multiple rows containing time periods
Hi,
I have a table with rows like this:
id, intime, outtime
1, 2010-01-01 00:10, 2010-01-03 20:00
I would like to split this row into multiple rows, 1 for each 24hr period in the record.
i.e. The above should translate into:
id, starttime, endtime, period
1, 2010-01-01 00:10, 2010-01-02 00:10, 1
1, 2010-01-02 00:10, 2010-01-03 00:10, 2
1, 2010-01-03 00:10, 2010-01-03 20:00, 3
The first starttime should be the intime and the last endtime should be the outtime.
Is there a way to do this without hard-coding the 24hr periods?
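For clarity, here is the desired expansion expressed procedurally (an illustrative Python sketch; the actual question is of course about doing this in SQL):

```python
from datetime import datetime, timedelta

def split_periods(intime, outtime):
    """Expand one (intime, outtime) row into consecutive 24-hour periods;
    the last period is truncated at outtime."""
    rows, start, period = [], intime, 1
    while start < outtime:
        end = min(start + timedelta(hours=24), outtime)
        rows.append((start, end, period))
        start, period = end, period + 1
    return rows
```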
Thanks,
Dan Scott
http://danieljamesscott.org
Thanks for all the feedback, Dan.
It appears that the respective solutions provided will give you: a) different resultsets and b) different performance.
Regarding your 'truly desired resultset', you haven't answered all the questions from my previous post (there are differences in the provided examples), but anyway:
I found that using CEIL or ROUND makes quite a difference with my simple 3-record test set: 30 records returned versus the 66 returned initially, less than half of the original. However, I must call it a day for now (it's almost midnight), so there's room for more optimization and I haven't tested thoroughly.
But this might hopefully make a difference performance-wise compared to my previous 'dreaded example':
SQL> drop table t;
Table dropped.
SQL> create table t as
2 select 1 id, to_date('2010-01-01 00:10', 'yyyy-mm-dd hh24:mi') intime, to_date('2010-01-03 20:00', 'yyyy-mm-dd hh24:mi') outtime from dual union all
3 select 2 id, to_date('2010-02-01 00:10', 'yyyy-mm-dd hh24:mi') intime, to_date('2010-02-05 20:00', 'yyyy-mm-dd hh24:mi') outtime from dual union all
4 select 3 id, to_date('2010-03-01 00:10', 'yyyy-mm-dd hh24:mi') intime, to_date('2010-03-03 00:10', 'yyyy-mm-dd hh24:mi') outtime from dual;
Table created.
SQL> select id
2 , max(intime)+level-1 starttime
3 , case
4 when level = to_char(max(t.outtime), 'dd')
5 then max(t.outtime)
6 else max(t.intime)+level
7 end outtime
8 , level period
9 from t
10 connect by level <= round(outtime-intime)
11 group by id, level
12 order by 1,2;
ID STARTTIME OUTTIME PERIOD
1 01-01-2010 00:10:00 02-01-2010 00:10:00 1
1 02-01-2010 00:10:00 03-01-2010 00:10:00 2
1 03-01-2010 00:10:00 03-01-2010 20:00:00 3
2 01-02-2010 00:10:00 02-02-2010 00:10:00 1
2 02-02-2010 00:10:00 03-02-2010 00:10:00 2
2 03-02-2010 00:10:00 04-02-2010 00:10:00 3
2 04-02-2010 00:10:00 05-02-2010 00:10:00 4
2 05-02-2010 00:10:00 05-02-2010 20:00:00 5
3 01-03-2010 00:10:00 02-03-2010 00:10:00 1
3 02-03-2010 00:10:00 03-03-2010 00:10:00 2
10 rows selected.
SQL>
By the way: I'm assuming you're on 10g, is that correct?
Can you give us some information regarding the indexes present on your table? -
Need to split the output into files
Hi,
I have a query about splitting output into different files. Please help me resolve it.
I have a select query...
SELECT INDEX_NAME FROM DBA_INDEXES WHERE TABLE_NAME=<Table Name>;
If it returns fewer than 4 indexes, then we have to create one file and move those index names into it.
For example...
File_1.sql
====
index1
index2
index3
If the select statement returns more than 4 indexes, then we have to create 4 files, split those indexes, and move them into those 4 files.
For example...
If the select statement returns 13 records, then...
File_1.sql File_2.sql File_3.sql File_4.sql
===== ====== ======= =======
index1 index4 index7 index10
index2 index5 index8 index11
index3 index6 index9 index12
index13
The indexes need not be in order in any file, and any file can have an extra index in it.
Either a procedure or a shell script would be fine. Please help me with this.
We are using a 10.2.0.1 Oracle DB. Please let me know if you need anything else.
Thanks
Pathan
Are you trying to put the output from SQL reports into different files?
Some reporting tools can do this.
You have a couple of options.
One is a reporting tool that supports writing to different files.
Another is to write a PL/SQL procedure using UTL_FILE, which can open multiple files based on your conditions and write to them as needed.
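Whichever tool writes the files, the distribution logic itself is small. A sketch (illustrative Python; the file names and the fewer-than-four rule follow the question, and round-robin dealing is assumed since order within a file does not matter):

```python
def distribute_indexes(indexes, max_files=4):
    """Put everything in one file when there are fewer than four indexes;
    otherwise deal them round-robin across four files."""
    n_files = 1 if len(indexes) < 4 else max_files
    files = {"File_%d.sql" % (i + 1): [] for i in range(n_files)}
    names = sorted(files)
    for i, idx in enumerate(indexes):
        files[names[i % n_files]].append(idx)
    return files
```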
An older, less elegant solution is to write nested SQL*Plus scripts spooling to different files with the queries you need. A top-level script would invoke the others, something like (untested):
--in first script
@subscript1
@subscript2
--in subscript1.sql
spool whatever.lst
select *
from dual;
... -
Split column into multiple text and number columns
I'm trying to figure out how to split this column into multiple columns with Power Query: one column for the company/person name, one for the address, and one for the zip. Some of the addresses have a three- or four-digit code before the address, which I would like in its own column too. It's the 170 on the LASTNAME, FIRSTNAME line. Does anyone have any pointers? I'm familiar with the PQ advanced editor, but struggling with this one.
COMPANY INC. 195 MAIN ST MYCITY ST 12345
LASTNAME, FIRSTNAME 170 477 ANY STREET CIRCLE MYCITY ST 12345
Thanks for your help!
Hi Gil,
We have a column with more than one number, separated by spaces, commas, or semicolons.
We need to add a row for each number while keeping all the other column values the same.
Here is the original table:

Col1 | Col2 | Col3
A    | B    | 11 22,33 44; 55
C    | D    | 10 20

and the expected output should be:

Col1 | Col2 | Col3
A    | B    | 11
A    | B    | 22
A    | B    | 33
A    | B    | 44
A    | B    | 55
C    | D    | 10
C    | D    | 20
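The expansion itself can be sketched like this (illustrative Python, not Power Query M; the mixed space/comma/semicolon separators follow the example above):

```python
import re

def expand_rows(rows):
    """For each (col1, col2, col3) row, split col3 on spaces, commas or
    semicolons and emit one row per number, repeating the other columns."""
    out = []
    for c1, c2, c3 in rows:
        for num in re.split(r"[ ,;]+", c3.strip()):
            out.append((c1, c2, num))
    return out
```

In Power Query the same effect would come from splitting the column on those delimiters and expanding the resulting list into rows.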
Please let us know the best way to solve this... -
Split Single IDOC into Multiple IDOC's Based on Segment Type
Hi Experts,
I have an IDOC-to-FILE scenario: split a single IDOC into multiple IDOCs based on segment type.
Outbound:
ZIdocName
Control Record
Data Record
Segment 1
Segment 2
Segment 3
Status Record
I should get output like below
Inbound:
ZIdocName
Control Record
Data Record
Segment 1
Status Record
ZIdocName
Control Record
Data Record
Segment 2
Status Record
ZIdocName
Control Record
Data Record
Segment 3
Status Record
Please suggest a step-by-step process to achieve this task.
Thanks.
Thanks a lot, Harish, for the reply.
I have a small doubt. According to your reply, if we have N segments with the same fields in a single IDOC, then to split that single IDOC into multiple IDOCs based on segment type we need to duplicate the target IDOC tree structure N times.
Is it possible to split a single IDOC into multiple IDOCs based on segment type using only one target IDOC structure, without duplicating the target IDOC structure tree?
Export XML data into multiple worksheets of an Excel file using the FO processor
Hi,
I need to export XML data to an Excel output; the data should flow into multiple worksheets of the Excel file.
Let me know if this can be done using XML Publisher. If yes, please provide me the steps to do so.
I could not achieve this through the process below:
(1) Created a RTF (which has single excel table structure).
(2) Generated the XSL file using XSL-FO Style Sheet.
(3) Passed the XSL file and XML
which exported the data into an Excel (single worksheet) format.
Please let me know how this can be exported into multiple worksheets.
Thanks & Regards,
Dhamodaran VJ.
Hi Dhamodaran,
Please pass me the template you created and the XML ("Created a RTF (which has single excel table structure)").
Let me have a look at it.
For my ID, look at my profile.
Split an IDOC into multiple IDOCs if the IDOC has more than 500 records
Hi All,
I developed an outbound IDOC in which we are facing an issue.
There is a limitation on the maximum IDoc size it can handle.
If the number of records is more than 500, we must split the IDoc into multiple IDocs: e.g. for 1300 records, the result would be two IDocs with 500 records each, and the last one with 300 records.
How can I achieve this?
Regards
Jai
Hi,
1) First you need to know which message type / IDoc type you are triggering.
2) Get the corresponding process code from the partner profiles (WE20/WE41).
3) Then look for a proper user exit in the related processing FM.
4) Write logic to split the IDoc accordingly.
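The chunking in the splitting step is the easy part; in outline (illustrative Python, not ABAP):

```python
def chunk_records(records, size=500):
    """Split a record list into IDoc-sized chunks of at most `size` records;
    the final chunk holds whatever remains."""
    return [records[i:i + size] for i in range(0, len(records), size)]
```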
If no proper user exit is available, copy the standard processing FM and redo all the ALE-related configuration.
Get hold of an ABAP expert in your team to do all this.
Suresh -
Split a record into multiple records
Hi,
I have a situation where I need to split a record into multiple records.
Input data:
value|BeginDate |EndDate
15 |2002/10/15|2002/10/16
13 |2002/10/13|2002/10/20
19 |2002/10/19|2002/10/23
10 |2002/10/10|2002/10/12
Output:
10 |2002/10/10|2002/10/12
13 |2002/10/13|2002/10/15
15 |2002/10/15|2002/10/16
13 |2002/10/16|2002/10/19
19 |2002/10/19|2002/10/23
Thanks
Hi,
As far as I understood from your example, I have a few questions...
1. You have information about the patient in one source table.
2. How are you identifying that for patient X you need 5 rows to be created? Are you storing this information in a separate table, or how?
3. If you have this information in a separate table, a simple cross join in ODI should get you the expected result.
Or give some more information with an example; that would be great.
Thanks -
Copy all files and rename them
Hi
My SQL script needs to copy all backup files in a folder to another one while renaming them at the same time by adding a time stamp, like this: Originalfilename[Timestamp].bak
Basically the script looks like this:
declare @cmdstring varchar(1000)
declare @filenamestr varchar(100)
set @filenamestr = CURRENT_TIMESTAMP
set @cmdstring = 'copy \\Webserver\BackupStorage\SQLbackup\*.* \\Webserver\BackupStorage\SQLbackupArchive\'
exec master..xp_cmdshell @cmdstring -- this part is not complete
What code needs to be added to rename each file during the copy process?
Patrick
SET NOCOUNT ON;
declare @sourcepath varchar(1000) = 'C:\temp\',
        @destinationpath varchar(1000) = 'C:\MyFiles\'
CREATE TABLE #FileList (
     FileID INT IDENTITY(1, 1)
    ,Line VARCHAR(512)
)
CREATE TABLE #output (
     result varchar(500)
)
DECLARE @Command VARCHAR(1024)
    , @RowCount INT
    , @counter INT
    , @FileName VARCHAR(1024)
-- list the files (not directories) in the source folder
SET @Command = 'dir ' + @sourcepath + ' /A-D /B'
INSERT #FileList
EXEC master.dbo.xp_cmdshell @Command
DELETE FROM #FileList
WHERE Line IS NULL
SELECT @RowCount = COUNT(*)
FROM #FileList
SET @counter = 1
WHILE ( @counter <= @RowCount )
BEGIN
    SELECT @FileName = Line
    FROM #FileList
    WHERE FileID = @counter
    -- copy the current file (not *.txt, which would recopy everything each pass)
    SET @Command = 'copy /-Y ' + @sourcepath + @FileName + ' ' + @destinationpath
    INSERT INTO #output
    EXEC master.dbo.xp_cmdshell @Command
    -- rename the copy, inserting _rename before the extension
    SET @Command = 'REN ' + @destinationpath + @FileName + ' '
        + LEFT(@FileName, CHARINDEX('.', @FileName) - 1) + '_rename.'
        + SUBSTRING(@FileName, CHARINDEX('.', @FileName) + 1, LEN(@FileName))
    PRINT @Command
    IF LEN(@Command) > 0
        EXEC master.dbo.xp_cmdshell @Command
    SET @counter = @counter + 1
END
DROP TABLE #output
DROP TABLE #FileList
Split one row into multiple columns
Hi,
Data in one CLOB column in a table is stored with the delimiter ##~~##, e.g. ##~~##abc##~~##defgh##~~##ijklm##~~##nopqr (the data starts with the delimiter). Please help me split the data into multiple rows like below, keeping the same order.
abc
defgh
ijklm
nopqr
I am using Oracle 11g.
Thanks.
Thanks, Hoek, for your response. Before posting my question in the forum, I tried a similar query. It works with a one-character delimiter.
with test as (select 'ABC,DEF,GHI,JKL,MNO' str from dual )
select regexp_substr (str, '[^,]+', 1, rownum) split
from test
connect by level <= length (regexp_replace (str, '[^,]+')) + 1;
The above query gives the correct result, fetching 5 rows. I have modified the query like below...
with test as (select 'ABC,,,DEF,,,GHI,,,JKL,,,MNO' str from dual )
select regexp_substr (str, '[^,,,]+', 1, rownum) split
from test
connect by level <= length (regexp_replace (str, '[^,,,]+')) + 1;
The above query returns 13 rows and the last 8 rows are null. The number of null rows increases as I add characters to the delimiter. Could you please tell me how to avoid those null rows?
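The root cause is that `[^,,,]` is a regex character class, so it is identical to `[^,]`: it matches any single character that is not a comma; it does not treat `,,,` as one delimiter. For comparison, a split on the full multi-character token is trivial outside SQL (illustrative Python, using the CLOB example from the original question):

```python
def split_on_token(s, delim="##~~##"):
    """Split on the whole multi-character delimiter and drop empty pieces
    (a leading delimiter otherwise yields an empty first element)."""
    return [part for part in s.split(delim) if part]
```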
Thanks. -
Every time I open iTunes lately, all my music gets thrown out of the library and this annoying message comes up: The file “iTunes Library.itl” does not appear to be a valid iTunes library file. iTunes has created a new iTunes library and renamed this file to “iTunes Library (Damaged) 4”. Is this how an Apple works? It doesn't? Why is it suddenly doing this? Please help. Thank you.
David
Did you happen to bring this iTunes library over from a Windows machine? You might read this KB article to see if it helps: http://support.apple.com/kb/HT1451
-
My iTunes library suddenly says that the file “iTunes Library.itl” does not appear to be a valid iTunes library file, and that iTunes has created a new iTunes library and renamed this file to “iTunes Library (Damaged)”. I use an external hard drive for my iTunes.
The library file appears to have been corrupted. You say you use your external drive, but is the whole library on it or only your media? No matter; you will have to rebuild the library file:
iTunes: How to re-create your iTunes library and playlists - http://support.apple.com/kb/ht1451 -
How to read a text file and write a text file
Hello,
I have a text file A that looks like this:
0 0
0 A B C
1 B C D
2 D G G
10
1 A R T
2 T Y U
3 G H J
4 T H K
20
1 G H J
2 G H J
I always want to get rid of the last letter and select only the first and last line of each block, then save them to another text file B. The output should look like this:
0 A B
2 D G
1 A R
4 T H
1 G H
2 G H
I know how to read and write a text file, but how can I select specific lines while reading? Can anyone give me an example?
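In outline, the first/last-line selection could work like this (an illustrative Python sketch; the same logic ports directly to Java, and it assumes data lines have at least four fields while header lines like "0 0", "10", "20" have fewer):

```python
def first_last_trimmed(lines):
    """Collect blocks of data lines (>= 4 fields) separated by header
    lines; for each block, emit its first and last line minus the last
    field. A one-line block contributes the same line twice."""
    out, block = [], []

    def flush():
        if block:
            for row in (block[0], block[-1]):
                out.append(" ".join(row.split()[:-1]))  # drop last field
            block.clear()

    for line in lines:
        if len(line.split()) >= 4:
            block.append(line)   # data line: part of the current block
        else:
            flush()              # header line ends the current block
    flush()                      # emit the final block
    return out
```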
Thank you
If text file A looks like this:
0 0
0 3479563,41166 6756595,64723 78,31 1,#QNAN
1 3479515,89803 6756588,20824 77,81 1,#QNAN
2 3479502,91618 6756582,6984 77,94 1,#QNAN
3 3479516,16334 6756507,11687 84,94 1,#QNAN
4 3479519,14188 6756498,54413 85,67 1,#QNAN
5 3479525,61721 6756493,89255 86,02 1,#QNAN
6 3479649,5546 6756453,21824 89,57 1,#QNAN
1 0
0 3478762,36013 6755006,54907 54,8 1,#QNAN
1 3478756,19538 6755078,16787 53,63 1,#QNAN
2 0
3 0
N 0
I want to read the lines before and after each "1 0", "2 0", ... "N 0" line into an ArrayList. I have written the following code:
public ArrayList<String> save2 = new ArrayList<String>();

public ArrayList<String> readfile(String path) {
    ArrayList<String> save = new ArrayList<String>();
    try {
        BufferedReader bufread = new BufferedReader(new FileReader(new File(path)));
        String read;
        // read the text file and save each line to the list
        while ((read = bufread.readLine()) != null) {
            save.add(read);
        }
        bufread.close();
        // a separator line such as "0 0", "1 0", ... "N 0" has exactly
        // two tokens, the second of which is "0"; data lines have five
        for (int i = 0; i < save.size(); i++) {
            String[] temp = save.get(i).trim().split("\\s+");
            if (temp.length == 2 && temp[1].equals("0")) {
                if (i > 0) save2.add(save.get(i - 1));               // line before the separator
                if (i + 1 < save.size()) save2.add(save.get(i + 1)); // line after it
            }
        }
    } catch (Exception d) {
        System.out.println(d.getMessage());
    }
    return save2;
}
Something is wrong with my code; it always prints out null. Can anyone help me?
Best Regards,
Zhang