Using today's date to group data

I have a cross-tab report which shows totals for the past five financial years. A financial year for us runs from 1st April to 31st March. At the moment I am using specified order and have defined five groups with date parameters. When we get to April next year I will have to update these parameters to create the new financial year group and delete the oldest one.
I have managed to create a parameter based on the print date field to get the various start and end dates, but these do not appear to be acceptable as input to the selection for the group.
Can anyone advise me whether this is possible, and what I need to do to my parameter to make it work?
Thanks

The data is a series of payments made over a number of years by a number of groups (I work for a charity). I want to see how much each group has given in each of the past five financial years.
At present I'm using a cross-tab report with the names of the groups on the vertical axis and the past five years on the horizontal axis.
Because I want financial years rather than calendar years I can't just group on date for each year, so I have created five groups. For example, financial year 2009 is anything with a date between 1/4/2009 and 31/3/2010 (UK dates, DD/MM/YYYY).
I want to replace the hard-coded years with years based on the current date so I don't have to update this report each year. The cross-tab expert won't let me put a formula name in.
Is there another way to do it?
Thanks for helping
Diane
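
For reference, the financial year can be computed from the date instead of hard-coded. If the report's data comes from a SQL database, one option is to derive a financial-year column in the data source and let the cross tab group on that single field rather than on five hand-maintained date ranges. The sketch below is plain SQL (not Crystal formula syntax), and the payments, group_name, payment_date and amount names are hypothetical:

-- Sketch only: table and column names are hypothetical.
-- fin_year labels anything dated 1/4/2009 through 31/3/2010 as 2009, and so on.
SELECT group_name,
       CASE WHEN EXTRACT(MONTH FROM payment_date) >= 4
            THEN EXTRACT(YEAR FROM payment_date)
            ELSE EXTRACT(YEAR FROM payment_date) - 1
       END         AS fin_year,
       SUM(amount) AS total_given
FROM   payments
-- ADD_MONTHS(TRUNC(ADD_MONTHS(SYSDATE, -3), 'YYYY'), 3) is 1 April of the current financial year,
-- so these two conditions keep the current financial year plus the four before it.
WHERE  payment_date >= ADD_MONTHS(ADD_MONTHS(TRUNC(ADD_MONTHS(SYSDATE, -3), 'YYYY'), 3), -48)
AND    payment_date <  ADD_MONTHS(ADD_MONTHS(TRUNC(ADD_MONTHS(SYSDATE, -3), 'YYYY'), 3), 12)
GROUP BY group_name,
         CASE WHEN EXTRACT(MONTH FROM payment_date) >= 4
              THEN EXTRACT(YEAR FROM payment_date)
              ELSE EXTRACT(YEAR FROM payment_date) - 1
         END;

The same rule (the calendar year of the date, minus one when the month is before April) can also be written as a formula field in the reporting tool, if it allows grouping on a formula.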

Similar Messages

  • Data Merge: Grouping data under a shared section title

    I am a total newb to using Data Merge in InDesign. I watched the "Automating a Catalog with Data Merge" video and found it very helpful. I am creating a catalog for an auto parts company, so I have a CSV file with fields for Make, Model, Year Range, and Part Number. I use Make as a header and put the other information below it. I do not want it to repeat Make (Honda) over and over.
    Rather than:
    Honda
    Accord         2005-2010     Part Number 4214-0100
    Honda
    Civic          2005-2010     Part Number 4214-0100
    Honda
    Odyssey     2005-2010     Part Number 4214-0100
    I would like it to display:
    Honda
    Accord         2005-2010     Part Number 4214-0100
    Civic          2005-2010     Part Number 4214-0100
    Odyssey     2005-2010     Part Number 4214-0100
    Can someone please give me some advice?
    Thanks!

    P Spier wrote:
    JonMedford wrote:
    I have tried this method and I think it would suffice, but it still doesn't format nicely. It puts too much space in between each row of data and not enough space between the sections I want to separate. I suppose I could do this by brute force once I get all of my data in, but that will be dreadful for a multiple-hundred-page catalog. Setting the merge to delete blank lines appears not to work. Not sure if this is a bug or not.
    A real catalog plugin is probably the way to go, but if you stick with Data Merge you can control spacing with paragraph styles, most likely. Apply a style with Space Before added to the paragraph containing the section.
    As far as removing blank lines goes, if the line isn't being removed, it probably means it isn't really blank. Blank means there can be NOTHING on the line except a field, and the field value in that record is null. If you have punctuation or white space of any sort on the line in addition to the field tag, it's never going to be blank, so you should be doing whatever concatenation you need in the data file first, as John mentioned.
    I might just do that. I am staring at this 138-page manual for the Smart Catalog plugin and trying to use the darn thing with very little success. There is no GUI formatting available and it is really frustrating.

  • Invoice date: always use today's date

    Hello!
    Is there a user exit we can use to always set the invoice date to today's date? In VF01 or VF04, is it also possible to set the system to always use today's date?
    Regards

    Hi,
    This data requirement routine works fine with order-related billing as well. Check in VTFA which data transfer (VBRK/VBRP) routine is assigned for the relevant item category. If you have assigned a customer-developed routine other than the standard one, try to incorporate the validation Billing Date = System Date as in standard routine 11 (Billing = Today's Date). This is the simplest way we achieved it for order-related billing.
    Hope this helps.
    Thanks
    Krishna.

  • How to group data and assign cell names using Excel templates

    Hi all,
    Reading the article "Real Excel Templates 1.5" on Tim Dexter's blog, I found that I need hierarchical data for Excel templates; only in this way can I group my data.
    My hierarchy is composed by 3 levels:
    lev 1 DESTINATION: is the higher level that groups SERVICES and COUNTRY
    lev 2 SERVICES: is the level that groups the countries
    lev 3 COUNTRY: is the lowest level with the COUNTRY, CALLS and CALLS_MINUTES details
    An example of my hierarchy is this:
    lev 1 INTERNATIONAL
    lev 2 INTERNATIONAL FIXED
    lev 3 Albania 90 438,15
    lev 3 Armenia 1 16,95
    lev 2 INTERNATIONAL MOBILE
    lev 3 Albania Mobile 161 603,35
    lev 3 Australia Mobile 6 34,38
    lev 1 NATIONAL
    lev 2 HELLAS LOCAL
    lev 3 Hellas Local 186,369 707940,6
    lev 2 HELLAS MOBILE
    lev 3 Hellas Mobile Cosmote 31,33 43856,97
    lev 3 Hellas Mobile Q-Telecom 2,398 4343,78
    lev 2 HELLAS NATIONAL
    lev 3 Hellas Long Distance 649 1499,55
    lev 1 INTERNET
    lev 2 INTERNET CALLS
    lev 3 Cosmoline @Free 79 2871,3
    So, my data template is the following (with exactly the hierarchy I want for my data):
    <dataTemplate name="emp" description="destinations" dataSourceRef="GINO_DB">
         <dataQuery>
              <sqlStatement name="Q1">
                   <![CDATA[SELECT 1 TOTAL_CALLS, 2 TOTAL_CALLS_MIN from dual ]]>
              </sqlStatement>
              <sqlStatement name="Q2">
                   <![CDATA[SELECT dest.ID_DESTINATION, dest.DESC_DEST from ale.AAA_DESTINATION dest order by dest.ID_DESTINATION ]]>
              </sqlStatement>
              <sqlStatement name="Q3">
                   <![CDATA[SELECT ser.ID_SERVICE,
    ser.ID_DEST,
    ser.DESC_SERVICE,
    count.ID_COUNTRY,
    count.ID_SERV,
    count.COUNTRY,
    count.CALLS,
    count.CALLS_MIN
    from ale.AAA_SERVICE ser, ale.AAA_COUNTRY count
    where ser.ID_SERVICE= count.ID_SERV
    and ID_DEST = :ID_DESTINATION
    order by ser.ID_SERVICE ]]>
              </sqlStatement>
         </dataQuery>
         <dataStructure>
              <group name="G_TOT" source="Q1">
                   <element name="TOTAL_CALLS" value="G_COUNTRY.CALLS" function="SUM()"/>
                   <element name="TOTAL_CALLS_MIN" value="G_COUNTRY.CALLS_MIN" function="SUM()"/>
                   <group name="G_DEST" source="Q2">
                        <element name="DESC_DEST" value="DESC_DEST"/>
                        <element name="DEST_CALLS_SUBTOTAL" value="G_COUNTRY.CALLS" function="SUM()"/>
                        <element name="DEST_CALLS_MIN_SUBTOTAL" value="G_COUNTRY.CALLS_MIN" function="SUM()"/>
                        <group name="G_SERV" source="Q3">
                             <element name="DESC_SERVICE" value="DESC_SERVICE"/>
                             <element name="SERV_CALLS_SUBTOTAL" value="G_COUNTRY.CALLS" function="SUM()"/>
                             <element name="SERV_CALLS_MIN_SUBTOTAL" value="G_COUNTRY.CALLS_MIN" function="SUM()"/>
                             <group name="G_COUNTRY" source="Q3">
                                  <element name="COUNTRY" value="COUNTRY"/>
                                  <element name="CALLS" value="CALLS"/>
                                  <element name="CALLS_MIN" value="CALLS_MIN"/>
                             </group>
                        </group>
                   </group>
              </group>
         </dataStructure>
    </dataTemplate>
    Leaving aside the CALLS and CALLS_MIN details (I focused only on COUNTRY, which is at the same level), with this data template I noticed in my tests on the Excel template that I can group ONLY two nested levels using the format XDO_GROUP_?group_name?
    XDO_GROUP_?G_DEST?
    XDO_GROUP_?G_SERV?
    or
    XDO_GROUP_?G_DEST?
    XDO_GROUP_?G_COUNTRY?
    or
    XDO_GROUP_?G_SERV?
    XDO_GROUP_?G_COUNTRY?
    If I try to group all the three level together in this order
    XDO_GROUP_?G_DEST?
    XDO_GROUP_?G_SERV?
    XDO_GROUP_?G_COUNTRY?
    I don't have the output I would like to have.....
    Practically, in my excel I have 3 rows with the following labels
    DESTINATION (called XDO_?DESC_DEST? - =Sheet1!$A$3)
    SERVICE (called XDO_?DESC_SERVICE? - =Sheet1!$A$4)
    COUNTRY (called XDO_?COUNTRY? - =Sheet1!$A$5)
    where
    XDO_GROUP_?G_DEST? (=Sheet1!$A$3:$B$5)
    XDO_GROUP_?G_SERV? (=Sheet1!$A$4:$B$5)
    XDO_GROUP_?G_COUNTRY?     (=Sheet1!$A$5:$B$5)
    I noticed that if I don't use the last one (XDO_GROUP_?G_COUNTRY?), my output is correct, even though I don't get more than one country for each service... As soon as I put in XDO_GROUP_?G_COUNTRY?, I lose all of the 2nd level and, most of the time, the 3rd level too...
    So... I think the problem is how I choose the Excel cells when I assign the XDO_GROUP_?group_name? names.
    Has anybody made similar tests, or can anyone help me? I'm going crazy...
    Any help will be appreciated
    Thanks in advance
    Alex

    But how can I use the XDO_GROUP_?? tags to group data correctly using hierarchical XML? I don't want to use flat XML.
    Yep, I am using the Template Builder in Excel to run reports locally, and the output is wrong.
    It seems that the groups can't determine the level of nesting, I think...
    How can I write it in the XDO_METADATA sheet?
    Even though I have hierarchical XML, and the groups should define the nesting level correctly.
    I have no clue.....

  • Date column: group by 15-minute interval using SQL

    I am using an Oracle 9i database. I have a large table containing many rows with the following example data. Each row identifies a transaction.
    Transaction_id, status, completed_date
    1667050 SUCCEEDED 4-Dec-03 00:00:44
    1667091 SUCCEEDED 4-Dec-03 00:05:45
    6670930 SUCCEEDED 4-Dec-03 00:09:46
    4359066 SUCCEEDED 4-Dec-03 00:10:46
    This table contains rows for a 24-hour period.
    I need to write a SQL query to generate a report giving me a count of transactions across the 24-hour period, grouped into 15-minute or half-hour intervals.
    For example:
    count time_interval
    5000 00:00:00 - 00:14:59
    2345 00:15:00 - 00:29:59
    and so on
    I am new to SQL, so I am having a hard time figuring out which function to use or how to group the date column to achieve the end result. I would be grateful if someone could throw out some suggestions.

    This should help.
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:4222062043865
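    For reference, a minimal sketch of the usual Oracle approach (assuming the table is named transactions; substitute the real table name): truncate completed_date to the hour, add back the minutes rounded down to a multiple of 15, and group on that value.
    -- Sketch only: "transactions" is a placeholder for the real table name.
    -- Each row is assigned to the start of the 15-minute interval its completed_date falls in.
    SELECT TRUNC(completed_date, 'HH24')
             + FLOOR(TO_NUMBER(TO_CHAR(completed_date, 'MI')) / 15) * 15 / (24 * 60)
               AS interval_start,
           COUNT(*) AS transaction_count
    FROM   transactions
    GROUP BY TRUNC(completed_date, 'HH24')
             + FLOOR(TO_NUMBER(TO_CHAR(completed_date, 'MI')) / 15) * 15 / (24 * 60)
    ORDER BY interval_start;
    For half-hour buckets, replace 15 with 30 in both places.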

  • Group data locked error for MM01 using parallel processing

    Hello gurus,
    I am using the call transaction method (MM01) with parallel processing (around 9 threads). Around 10 percent of the materials are sometimes getting locked.
    This is happening randomly: one day I don't have any locking errors, the next day I do. Any ideas why this could be? Is there any prerequisite I need to check before executing the parallel processing?
    Thank you in advance..
    sasidhar p

    Hi Sasidhar
    I guess you are either extending the Sales Data or the MRP Data. Just make sure that you are processing these transactions linearly for a single material. We can use parallel processing for different materials, but if we go for parallel processing on a single material we can definitely expect the lock object error.
    Kind Regards
    Eswar

  • Data Acquisition - using local variables to write data to a file

    Hello,
    I am running a data acquisition VI (currently in LabVIEW 7.1 but soon to be updated to 8.2) that collects ~100 parameters of data from several sources contained in a while loop. The current configuration (which I did not write) uses very few subVIs and writes to ~100 local variables to store each parameter. It then reads all the local variables, builds an array of all the strings, converts them to a spreadsheet string, then uses the Write Characters To File function to append to a data file. I am trying to clean things up and have come up with subVIs to collect the data from the following sources:
    8 serial port sources collecting between 8 and 20 parameters each
    ~15 thermocouple readings
    ~10 analog inputs
    ~20 parameters read off an ARINC 429 bus.
    I have come up with a subVI to read each of the sources and have placed the subVIs in the while loop. Each subVI outputs the data that it collects in array or cluster form. I was wondering how best to write each parameter to a CSV file at between 1 and 10 Hz. Should I write each subVI output to a local variable (LV) and then read them off as was done before (the difference being that I have reduced the number of LVs to ~10 vs. >100)?
    I should add that precise timing is not that important, so if all the subVIs are not collecting simultaneously (which I understand that they won't be), it does not really matter.
    Thanks.

    Hi jilla,
    jilla wrote:
    What I think that you are saying is to turn the outputs of the 4 subVIs into inputs of a 5th subVI that writes to the data file. Correct?
    Yes. It may sound like a fine point, but I believe it's better to create a VI specifically for formatting data - in your example, 4 arrays IN, a single string OUT. Then write the string to file as a separate operation. GUI-displayed data can go through a similar transformation: the four arrays wired to a subVI which builds output structures specifically for display. It's a beginner's mistake to put lots of individual controls and indicators on the screen when groups of them are naturally related (in an object-oriented sense). Use clusters to group related controls - this will keep the diagram much cleaner.
    One more question: at what point (either # of data points or frequency of data collection) does it become necessary to use queues? Thanks.
    Well, there's not really a clearly definable "point". I'd say if your update rate climbs above 100 Hz, or you witness poor program or system performance, then it's time. The scenario you've described is a fairly simple acquire/display-and-log loop - and simple is good. Then again, people can't see or react to updates faster than about 10 Hz - so it doesn't make sense to sacrifice performance - if performance becomes an issue.
    Re: queues:  Queues are sometimes used to buffer data that's "produced" in one place and "consumed" in another.
    Here, if/when logging data, you're logging with every DAQ.  I wouldn't recommend using a queue to transport data from a "DAQ loop" to a "Logging-loop" - those functions can be in the same loop.  Should/could a queue be used to get data from a "DAQ loop" to update the GUI at a lower frequency?  Sure, but a Notifier might be a better choice.   Further, in the (simple?) program you've described, you might use a case structure (True/False) to only update FP indicators every "X" iterations - a simple solution that doesn't require Queues or Notifiers.
    Cheers!
    "Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)

  • Use LINQ to extract the data from a file...

    Hi,
    I have created a sub-procedure, CreateEventList, which populates an EventsComboBox with the current day's events (if any). I need to store the events in a generic List, communityEvents, which is a collection of CommunityEvent objects. This List needs to be created and assigned to the instance variable communityEvents.
    This method should call a helper method, ExtractData, which will use LINQ to extract the data from my file. The specified day is the date selected on the calendar control. This method will be called from CreateEventList.
    This method should clear all data from List communityEvents. A LINQ query that creates CommunityEvent objects should select the events scheduled for the selected day from the file. The selected events should be added to List communityEvents.
    See code below.
    Thanks,
    public class CommunityEvent
    {
        private int day;
        public int Day
        {
            get { return day; }
            set { day = value; }
        }
        private string time;
        public string Time
        {
            get { return time; }
            set { time = value; }
        }
        private decimal price;
        public decimal Price
        {
            get { return price; }
            set { price = value; }
        }
        private string name;
        public string Name
        {
            get { return name; }
            set { name = value; }
        }
        private string description;
        public string Description
        {
            get { return description; }
            set { description = value; }
        }
    }

    private void eventComboBox_SelectedIndexChanged(object sender, EventArgs e)
    {
        if (eventComboBox.SelectedIndex == 0)
            descriptionTextBox.Text = "2.30PM. Price 12.50. Take part in creating various types of Arts & Crafts at this fair.";
        if (eventComboBox.SelectedIndex == 1)
            descriptionTextBox.Text = "4.30PM. Price 00.00. Take part in cleaning the local Park.";
        if (eventComboBox.SelectedIndex == 2)
            descriptionTextBox.Text = "1.30PM. Price 10.00. Take part in selling goods.";
        if (eventComboBox.SelectedIndex == 3)
            descriptionTextBox.Text = "12.30PM. Price 10.00. Take part in a game of rounders in the local Park.";
        if (eventComboBox.SelectedIndex == 4)
            descriptionTextBox.Text = "11.30PM. Price 15.00. Take part in an Egg & Spoon Race in the local Park";
        if (eventComboBox.SelectedIndex == 5)
            descriptionTextBox.Text = "No Events today.";
    }

    Any help here would be great.
    Look, you have to make the file an XML file type -- Somefilename.xml.
    http://www.xmlfiles.com/xml/xml_intro.asp
    You can use XML Notepad to make the XML and save the text file.
    http://support.microsoft.com/kb/296560
    Or you can just use Notepad (standard), if you know the basics of how to create XML, which is just text data that can be created and saved in a text file and which represents data.
    http://www.codeproject.com/Tips/522456/Reading-XML-using-LINQ
    You can do a (select new CommunityEvent) just like the example does a
    select new FileToWatch, and load the XML data into the CommunityEvent properties.
    So you need to learn how to make a manual XML text file with XML data in it, and you need to learn how to use LINQ to read the XML. LINQ is not going to work against some flat text file you created. There are plenty of examples out on Bing and Google
    on how to use LINQ-to-XML.
    http://en.wikipedia.org/wiki/Language_Integrated_Query
    <copied>
    LINQ extends the language by the addition of query expressions, which are akin to SQL statements, and can be used to conveniently extract and process data from arrays, enumerable classes, XML documents, relational databases, and third-party data sources. Other uses, which utilize query expressions as a general framework for readably composing arbitrary computations, include the construction of event handlers or monadic parsers.
    <end>

  • FI: where to maintain the data of "Group account number"?

    Hi. When I use the T-code FS00 to create a G/L account, SAP requires me to make an entry in the field "Group account number". But when I press F4 on the field "Group account number", SAP tells me there are no values. So I need to maintain the data for "Group account number", but I don't know where to maintain it. Could someone give me the T-code or some advice? Thanks so much.

    Hi,
    You have activated group account numbers. If you have a group account number for your company, please provide that group company account number.
    If you don't want to give a group account number,
    remove the group chart of accounts from your chart of accounts in T-code OB13.
    Best Regards
    Ashish Jain
    Message was edited by:
            Ashish Bohara

  • Data Source Groups in Query not reflecting changes in Admin

    Version of US used: 1.0.3
    I removed some data source groups and added new ones in the Admin application, but the default Query application does not reflect these changes. That is, when I go to Advanced Search in the Query application, the list of Data Source Groups displayed (i.e. the checkboxes) does not include the new ones I added, and it still shows the ones I already deleted.
    My question: how do I update the data groups in the Advanced Search screen? Do I have to execute all the schedules again before the updates are reflected?
    Thanks!

    Hi,
    The caching Cindy was referring to is NOT in the browser; it is in the JSP middle tier.
    The JSPs cache the data group information to avoid fetching it from the database every time,
    since data groups change infrequently.
    In addition, in version 1.0.3, this cache does not have any invalidation logic. So once the
    search application has started (after first use), the data groups will never change unless
    the application server (apache+jserv) is restarted.
    Please restart the application server to see your changes take place. If you wish, you can
    change the caching logic in the jsp itself. You may implement some trivial invalidation based
    on time, or disable it if your server can handle the load.
    Note: Ultra Search samples in 9.0.2 or later releases have invalidation of cache every 15 mins
    or so.
    David

  • iPhone using extremely high amounts of data when not in use

    I have a 4S on AT&T and am grandfathered in w/ the unlimited data plan. Last month I got the notification that I was in the top 5% of data users (which sounds crazy to me considering what some other people I've read about seem to use). According to AT&T's site I had used 2.09 GB with 16 days left in my billing cycle.
    Once I got that notification I logged on to see if I could find out when I was using such large amounts of data. It turns out that between midnight and 1 am almost every night there were large amounts of data sent. I think this is strange for two reasons: 1) I'm on WiFi at home in my apartment and 2) I wasn't even awake at the times these charges occurred.
    Example: January 6, 2012 at 12:40 am I was charged for 417,327kb, roughly 417MB.
    Similar data usage occurred almost every night going back to 12/28/11. On 12/29 I was charged for 468MB at 12:21am. Very unlikely this was actually data I used since I had work the next day and wasn't awake. It honestly looks like 80-85% of my data usage is coming from these occurrences. It also isn't likely that this is a total of data used throughout the day as there are other entries of smaller amounts spread out throughout the day.
    Now, if I were on a truly unlimited plan and there was no such thing as throttling, I really wouldn't care about this. But the fact is that my 3G speeds are being throttled (I just ran a speed test at a location where I used to get over 1 Mbps and I was at .07 Mbps). I have spoken to AT&T and they insisted it was an issue with my phone, either the hardware or the OS. So I went to Apple and the "Genius" did a DFU restore for me in store and told me that would fix it. It hasn't, and now I'm stuck with this constantly happening and unbearably slow 3G data speeds for the rest of the month.
    That was my last billing cycle where I was throttled down to download speeds of .07mbps. Unusable. This month I decided to closely monitor my data usage. I turned off iCloud and photostream, and turn off data whenever I'm using wifi. I turned off the sending info to apple setting.
    I checked my account on ATT.com today and noticed that yesterday morning at 7:58 am, while driving to work, I somehow used 247 MB. Don't ask me how; I'm not streaming anything and my phone is either locked or playing music from the iPod app. I could probably stream Netflix for 5 hours and not use that much data.
    I called AT&T to let them know (AGAIN) and got bumped up to a "Data manager" who was incredibly b*tchy and rude. She said there must be some app that's using that much data to update. I don't have any apps that are even that size, so I don't see how this could possibly be the case. I tried to talk with her about how that can't really be the case and she just kept repeating that there must be some app using the data, blah blah. I went to Apple with this last month after failing with AT&T; at first they did a DFU restore and we restored the phone as new, then it kept happening, so I went back and they replaced my phone. AT&T said it must be a hardware or software issue last month, but this month it must be an app that's doing this. I can't get a straight answer from them and they just keep passing the ball, saying it's Apple's problem.
    I really don't know what to do. I'm grandfathered in on the unlimited plan and don't really want to change that. Say I change my plan to the 3GB's at the same $30/month. What's to say this won't keep happening and I'll be over that 3GB's in 10 days then get charged an extra $10 for each additional GB?!
    I can't think of any apps I've downloaded in the past few months that would result in this change and I can't live with another 2/3 of my billing cycle being throttled down to unusable speeds.
    Has anybody else had anything similar happen? I have no idea what to do next (and sorry for the long rant but I'm fresh off the phone with AT&T and I'm ******)

    Thanks for the link. I only went through the last three pages and unfortunately it looks like there is no solution. I turn data off every night and turn it on only when I'm not in a wifi area. It seems that as soon as I turn it on, within an hour it charges me data for a backlog of whatever it didn't do when it was connected to wifi.
    I think it's absolutely disgusting that AT&T can just dismiss this and act like it's the user's fault. Apps and email updating in the middle of the night to the tune of 400MB? I highly doubt it, and then when I called to ask them about it they tried to make me sound stupid, like I have an app or two open. It just really ticked me off. The worst part about it is that there doesn't appear to be anything I can do about it aside from never using my data.

  • UME using SAP R/3 as Data Source

    Hi,
    We are trying to set up user authentication against an SAP R/3 system (not a load-balanced system), with the following User Management configuration values: Client=501, User ID=sapjsf, Password=pwd, System ID=RS1, Group and Message server=blank, Application server=server.company.com, System number=00, Max pool=10, Max wait=300000.
    When testing connection, I get this message:
    (System ID): com.sap.mw.jco.JCO$Exception: (101) RFC_ERROR_PROGRAM: 'mshost' missing
    (System ID & System Number): OK
    Is this an error, given that our SAP R/3 is not a load-balanced system?
    Did we miss anything in the setup, in dataSourceConfiguration_r3.xml? The SAPJSF communication user has the right SAP role and authorizations.
    Portal version : EP6 SR1
    Regards
    Huzaifah

    Hi,
    If you want to use the SAP R3 system as the data source, you can do it from the Config Tool if you get the following message:
    WARNING! You are not allowed to select dataSourceConfiguration_r3.xml as active configuration file.
    (For portal patches below SP13 you must download the two data source files attached to note 718383
    and upload them to the portal, as described in the note.)
    The following is the procedure I applied:
    Go to System Administration -> System Configuration -> UM Configuration.
    Do not change the data source from here.
    Make sure your data source is "Database Only"
    (dataSourceConfiguration_database_only.xml).
    Now enter the following values under the SAP System tab:
    Client : - Your sap system client
    User:-  Sap user
    password: - password
    System language:- your system language
    Application server: - Host name or IP of sap system
    System Number : -  SAP instance number
    Maximum Size of Connection Pool : -  As per req.
    Maximum Wait Time in Milliseconds :- 10000
    Now, save the changes and shut down the portal server.
    Using the Config Tool, change the data source. Run the following:
    <drive:\> usr\sap\<sid>\JC<instance number>\J2EE\configtool\configtool.bat
    (Make sure the portal system is shut down.)
    Under Cluster Data -> Global Server Configuration -> services -> com.sap.security.core.ume.services
    Now find the key: ume.persistence.data_source_configuration
    The default is: dataSourceConfiguration_database_only.xml
    Change the value to: dataSourceConfiguration_r3.xml
    Click Set and then File -> Apply.
    Now restart the portal server, and your data source is changed to the SAP R3 system.
    Regards,
    Kaushal

  • [Forum FAQ] How do I export each group's data to separate Excel files in Reporting Services?

    Introduction
    There is a scenario where a report is grouped by one field for some reason, and the users want to export each group's data to separate Excel files. By default, we can directly export only one file at a time on the report server. Is there a way we can split
    the report based on the group, then export each report to an Excel file?
    Solution
    To achieve this requirement, we can add a parameter with the group values to filter the report based on the group, then create a data-driven subscription for the report which gets the file name and parameter value from the group values.
    In the report, create a parameter named Name which uses the Name field as Available Values (supposing the group is grouped on the Name field).
    Add a filter as below in the corresponding tablix:
    Expression: [Name]
    Operator: =
    Value: [@Name]
    Deploy the report, then create a data-driven subscription with the Windows File Share delivery extension for the report in Report Manager.
    In step 3 of the data-driven subscription, specify a query that returns the Name field with the same values as the group in the report (see the sketch after these steps).
    In step 4 (Specify delivery extension settings for Report Server FileShare), below the "File name" option, select "Get the value from the database", then select the Name field.
    Below the "Render Format" option, select Excel as the static value.
    In step 5, configure the parameter Name as "Get the value from the database", then select the Name field.
    Then specify that the subscription executes only one time.
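    As an illustration of step 3, the subscription query could look like the sketch below. SalesData is a hypothetical table standing in for whichever dataset holds the group field; the same column is returned twice so one copy can drive the file name and the other the Name parameter.
    -- Sketch only: SalesData is a placeholder for the real source table.
    -- One row per group value: FileName feeds the file share file name,
    -- NameParameter feeds the report's Name parameter.
    SELECT DISTINCT Name AS FileName,
                    Name AS NameParameter
    FROM   SalesData;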
    References:
    Create a Data-Driven Subscription
    Windows File Share Delivery in Reporting Services
    Applies to
    Reporting Services 2005
    Reporting Services 2008
    Reporting Services 2008 R2
    Reporting Services 2012
    Please click to vote if the post helps you. This can be beneficial to other community members reading the thread.

    Thanks,
    Is this a supported scenario, or does it use unsupported features?
    For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
    in a supported way?
    Thanks! Josh

  • Best way to group data and show details of only last result in group

    Hi All! First off, just letting everyone know that this is a great place to explore and learn Oracle - I've learned more here than I have in some classes. Just exploring the forums and searching for an answer leads me to functions that I hadn't otherwise known existed.
    Here's what I'm looking to accomplish now... let's say I have a table that holds family information - if two or more persons are related in a family (determined by a separate table) then it should return the person identification and then the details of the group.
    For instance, the following data is contained in two tables; the current result follows and then the result I'm looking for...
    PERSONS TABLE
    PERSON         PERSON_ID           ADDRESS
    John Smith     101                     1 Oracle Drive
    Jane Smith     102                     1 Oracle Drive
    RELATIONSHIPS TABLE
    PERSON_ID      RELATEDPERSON_ID
    101                 102
    102                 101
    A simple query would result in the following:
    WITH PERSONS AS
    (
      SELECT 'John Smith' AS person, 101 AS person_id, '101 Oracle Drive' AS address FROM dual union all
      SELECT 'Jane Smith', 102, '101 Oracle Drive' FROM dual
    )
    ,    RELATIONSHIPS AS
    (
      SELECT 101 AS person_id, 102 AS relatedperson_id FROM dual union all
      SELECT 102, 101 FROM dual
    )
    SELECT
        person
      , address
    FROM  PERSONS p
    JOIN  RELATIONSHIPS r ON r.person_id = p.person_id
    RESULT
    PERSON      ADDRESS
    John Smith      101 Oracle Drive
    Jane Smith      101 Oracle Drive
    I'm looking to generate the following result but I'm not sure how to accomplish this... I'm confident it's something simple.
    DESIRED RESULT
    PERSON      ADDRESS
    John Smith     
    Jane Smith      101 Oracle Drive
    Notice that the address for the related family members is not displayed until the last family member is returned. It would repeat this process for each family.
    Thanks everyone for any help you can provide! 11g
    Edited by: nage62587 on Oct 16, 2012 8:20 PM

    Hi,
    nage62587 wrote:
    I've done a lot of forum searching and revised my question a bit to hopefully make things a bit clearer... I've taken a query Frank wrote and revised it to meet my criteria;
    Searching the forum (and other places on the web) is great! Not only do you find things yourself, but people on this forum are more likely to help you when they see that you're doing all you can.
    If you find something that you try to adapt, post a link to it. Seeing the correct way to adapt it can be very instructive.
    essentially, if I can determine what relatives a person has, then I can create a unique "FAMILY_ID" for them - once I have that, it would appear as though I could then use the FAMILY_ID to group their addresses and other info together.
    The problem I'm having is that if a RELATEDPERSON_ID is linked to a PERSON_ID (PERSON_ID linked to RELATEDPERSON_ID works great), it assigns them a new FAMILY_ID rather than including them into the correct family.
    Here's my SQL:
    WITH PERSONS AS
    (
    SELECT 'John Smith' AS person, 101 AS person_id, '1 Oracle Drive' AS address FROM dual union all
    SELECT 'Jane Smith', 102, '1 Oracle Drive' FROM dual union all
    SELECT 'Jack Smith', 103, '8 Oracle Drive' FROM dual union all
    SELECT 'John Doe', 104, '10 Oracle Drive' FROM dual union all
    SELECT 'Jane Doe', 105, '10 Oracle Drive' FROM dual union all
    SELECT 'Pete Smith', 106, '1 Oracle Drive' FROM dual
    )
    ,    RELATIONSHIPS AS
    (
    SELECT 101 AS person_id, 102 AS relatedperson_id FROM dual union all
    SELECT 102, 101 FROM dual union all
    SELECT 104, 105 FROM dual union all
    SELECT 105, 104 FROM dual union all
    SELECT 106, 101 FROM dual
    )
    ,    table_x AS
    (
    SELECT   person_id         AS col1
    ,        relatedperson_id  AS col2
    FROM     relationships
    )
    ,     got_relatives     AS
    (
         SELECT     col1
         ,     CONNECT_BY_ROOT col2     AS relative
         FROM     table_x
         CONNECT BY NOCYCLE     col1     =  col2
                         OR     col2     =  col1
    )
    SELECT       col1
    ,       DENSE_RANK () OVER ( ORDER BY  MIN (relative)
                               ) AS family_id
    FROM      got_relatives
    GROUP BY  col1
    The result I expect is:
    COL1   FAMILY_ID
    102     1
    106     1
    101     1
    105     2
    104     2
    I suspect that whatever you copied originally had the PRIOR keyword somewhere in the CONNECT BY clause.
    Here's one way to do what you want:
    WITH    all_relationships     AS
    (
         SELECT  person_id
         ,       relatedperson_id
         FROM    relationships
        UNION
         SELECT  relatedperson_id     AS person_id
         ,       person_id            AS relatedperson_id
         FROM    relationships
    )
    ,     got_relatives     AS
    (
         SELECT     CONNECT_BY_ROOT person_id     AS person_id
         ,          relatedperson_id
         FROM       all_relationships
         CONNECT BY NOCYCLE     person_id         = PRIOR relatedperson_id
                     OR         relatedperson_id  = PRIOR person_id
    )
    ,     got_family_id     AS
    (
         SELECT       person_id
         ,       MIN (relatedperson_id)     AS family_id
         ,       ROW_NUMBER () OVER ( PARTITION BY  MIN (relatedperson_id)
                                      ORDER BY      person_id     DESC
                                    )            AS r_num
         FROM       got_relatives
         GROUP BY   person_id
    )
    SELECT       p.person
    ,       CASE
               WHEN  f.r_num  = 1
               THEN  p.address
           END          AS address
    ,       p.person_id
    ,       f.family_id
    FROM       got_family_id  f
    JOIN       persons      p  ON  p.person_id  = f.person_id
    ORDER BY  family_id
    ,         person_id
    ;
    Output:
    PERSON     ADDRESS          PERSON_ID  FAMILY_ID
    John Smith                        101        101
    Jane Smith                        102        101
    Pete Smith 1 Oracle Drive         106        101
    John Doe                          104        104
    Jane Doe   10 Oracle Drive        105        104
    Obviously, you don't have to display all the columns I displayed above. In your first message, you said you wanted only person and address. In your last message, you said you only wanted person_id and family_id. Change the main SELECT clause any way you wish.
    I used the lowest person_id in each family as the family_id. You could use DENSE_RANK if you really wanted to have the families numbered 1, 2, 3, ..., but I suspect you don't really care what family_id is, as long as all the members of the family have the same value.
    Your relationship table has some symmetrical relationships, such as
    SELECT 104, 105 FROM dual union all
    SELECT 105, 104 FROM dual union all
    and some asymmetrical relationships. For example, the only relationship involving person_id=106 is
    SELECT 106, 101 FROM dual
    that is, there is no mirror-image row:
    -- SELECT 101, 106 FROM dual union all   -- THIS IS NOT IN THE SAMPLE DATA
    I assume there is no significance to that. As long as 101 and 106 appear on the same row, they are in the same family, regardless of which is the person_id and which is the relatedperson_id.
    So the first thing I did above was make sure all the mirror-image rows were represented. That's what all_relationships does.
    The next sub-query, got_relatives, is probably what you meant to adapt, but you left off the PRIOR operators.
    Got_family_id actually does the grouping, and also computes r_num to determine which is the last member of the family. Only that family member's address will be displayed in the main query.
    You could combine got_family_id and the main query; you don't really need a sub-query for that. It would be a little less coding, but I wrote it this way because I think it's a little easier to understand and maintain.

  • Start to finish, grouped data to xmlp

    Can anyone outline for me the best process by which I would query multi-level grouped data, generating subtotals in the database, and then pass that data to the XML publisher report, with the notion of grouping and subtotals communicated to the report? I don't want to perform the grouping in the report, as it involves large numbers of records. I can find in various places how to generate XML from the database using SQLX, but I don't understand how to get that XML to be used in a report as a datasource. I can paste it in as a query in the MS Word Template builder, but don't know how to get it into the XMLP Enterprise side of things.
    So, given this example query, could someone tell me how to get the data and the meaning of the data into XMLP?
    select emp.deptno,emp.mgr,emp.empno, sum(emp.sal)
    from emp inner join dept
    on (emp.deptno=dept.deptno)
    group by
    rollup(emp.deptno,emp.mgr,emp.empno)
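    As a side note on the query itself (just a sketch of a common refinement, not an answer to the data template question): adding GROUPING() flags to the rollup lets every row state explicitly whether it is a detail row, a subtotal or the grand total, which the report or data template can then use to render subtotal rows differently.
    -- Sketch only: the same rollup with GROUPING() flags added.
    select emp.deptno,
           emp.mgr,
           emp.empno,
           sum(emp.sal)         as sal_total,
           grouping(emp.empno)  as empno_rolled_up,  -- 1 on every subtotal and total row
           grouping(emp.mgr)    as mgr_rolled_up,    -- 1 on deptno-level subtotals and the grand total
           grouping(emp.deptno) as deptno_rolled_up  -- 1 only on the grand total row
    from emp inner join dept
    on (emp.deptno=dept.deptno)
    group by
    rollup(emp.deptno,emp.mgr,emp.empno);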

    Oh, yes, I've tried...but I'm at a loss as to how to put the various pieces of XML Publisher together with a data template in the mix. Can anyone give an example of how one would actually go from my example query through data template and .rtf template to final report in XML Publisher Enterprise?
