StreamSource, Singleton, Transformation and performance

Maybe my issue is more of design issue (disclaimer: yes, I am new to Java).
I am sure this problem has been solved several times ... so I hope someone can offer their sagely advice.
I have a tomcat (5.5) app that I am working on. I am currently using XSL to filter/do my queries (and maybe I need a different approach there, so I'm open). This is really about a query not taking 5 minutes; I thought the XSL would be the better way to do that.
I have implemented a singleton to hold the XML StreamSource in memory and that seems to be working. I verified the singleton is loading and staying, but it's still taking the same amount of time to execute the query (apply the xsl).
Is it because the StreamSource still has to stream (i.e., an I/O operation)? What is a more efficient in-memory way (RAM is not an issue in this case) of loading and holding an XML document to perform queries (XPath/XSL) against?
Below is the Singleton implementation:
import java.io.ByteArrayOutputStream;
import javax.xml.transform.Result;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class Lookup {
    private static Lookup instance = null;
    private static StreamSource theXML = null;
    private static String xmldoc_filename = "/path/to/17MBPlusOf.xml";

    protected Lookup() {
        loadXML(xmldoc_filename);
    }

    public static synchronized Lookup getInstance() {
        if (instance == null) {
            instance = new Lookup();
        }
        return instance;
    }

    private void loadXML(String filename) {
        if (theXML == null) {
            theXML = new StreamSource(filename);
        }
    }

    public String xslTransform(String query) throws TransformerException {
        String xslFile = "/path/to/XSLFile.xsl";
        String xmlString;
        TransformerFactory tFactory = TransformerFactory.newInstance();
        // do the transformation here
        StreamSource theXSL = new StreamSource(xslFile);
        Transformer xslTrans = tFactory.newTransformer(theXSL);
        xslTrans.setParameter("qryText", query);
        ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
        Result res = new StreamResult(byteStream);
        xslTrans.transform(theXML, res);
        xmlString = byteStream.toString();
        return xmlString;
    }
}

java5on wrote:
Thanks .... will try it tomorrow ... but I don't see how caching a 30-50 line xsl chunk is going to speed up the process in comparison to caching the 34+K lines of xml.
In Java, what would be the overall preferred method to load/cache a large amount of data (XML) for processing (parsing/querying) by multiple clients/queries?

Oh, I didn't completely follow what you were doing there. You should probably parse the XML into a Document and use a DOMSource for your transform call. I don't know if Documents are thread-safe, so you may need a similar ThreadLocal caching scheme here as well.
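A minimal sketch of that suggestion (class and field names are illustrative, file paths copied from the question above): parse the XML once into a DOM Document wrapped in a DOMSource, compile the stylesheet once into a Templates object (which, unlike Transformer, is documented as thread-safe), and create a cheap per-request Transformer. If concurrent reads of the shared Document turn out to be a problem, the ThreadLocal idea mentioned above could wrap the DOMSource instead.

import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.w3c.dom.Document;

public class CachedLookup {
    // Parsed once; later transforms read from memory instead of re-streaming the 17 MB file.
    private static final DOMSource CACHED_XML;
    // Templates holds the compiled stylesheet and is safe to share across threads.
    private static final Templates CACHED_XSL;

    static {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse("/path/to/17MBPlusOf.xml");
            CACHED_XML = new DOMSource(doc);
            CACHED_XSL = TransformerFactory.newInstance()
                    .newTemplates(new StreamSource("/path/to/XSLFile.xsl"));
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static String xslTransform(String query) throws TransformerException {
        Transformer t = CACHED_XSL.newTransformer(); // cheap: no stylesheet re-compilation
        t.setParameter("qryText", query);
        StringWriter out = new StringWriter();
        t.transform(CACHED_XML, new StreamResult(out));
        return out.toString();
    }
}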

Similar Messages

  • Singleton, Concurrency and Performance

    Dear all,
    Below is part of the code of my Singleton ServiceLocator:
    public class ServiceLocator {
        private InitialContext ic;
        private Map cache;
        private static ServiceLocator me;

        static {
            try {
                me = new ServiceLocator();
            } catch (ServiceLocatorException se) {
                System.err.println(se);
                se.printStackTrace(System.err);
            }
        }

        private ServiceLocator() throws ServiceLocatorException {
            try {
                ic = new InitialContext();
                cache = Collections.synchronizedMap(new HashMap());
            } catch (NamingException ne) {
                throw new ServiceLocatorException(ne);
            } catch (Exception e) {
                throw new ServiceLocatorException(e);
            }
        }

        public static ServiceLocator getInstance() {
            return me;
        }

        public EJBLocalHome getLocalHome(String jndiHomeName) throws ServiceLocatorException {
            EJBLocalHome home = null;
            try {
                if (cache.containsKey(jndiHomeName)) {
                    home = (EJBLocalHome) cache.get(jndiHomeName);
                } else {
                    home = (EJBLocalHome) ic.lookup(jndiHomeName);
                    cache.put(jndiHomeName, home);
                }
            } catch (NamingException ne) {
                throw new ServiceLocatorException(ne);
            } catch (Exception e) {
                throw new ServiceLocatorException(e);
            }
            return home;
        }
    }
    I 've some questions concerning the concept of Singleton:
    1. Assume using the code above. If 2 threads access the getInstance method at the same time, will one of the threads get the instance while the other gets null, or will the other be queued up and wait?
    2. If the answer to question #1 is positive, can we conclude that a Singleton will lower the performance of a web application that needs high concurrency?
    3. Should we create a pool of ServiceLocators instead of making it a Singleton?
    4. For EJBs looking up EJBs, why should we not use a Singleton ServiceLocator?
    Thanks in advance.
    Jerry.

    I 've some questions concerning the concept of
    Singleton:
    1. Assume using the code above. If 2 threads access the getInstance method at the same time, will one of the threads get the instance while the other gets null, or will the other be queued up and wait?
    If I read your code correctly, you initialize the singleton instance statically (beforehand).
    The first thread that invokes getInstance will be somehow "queued up" while the VM loads and initializes the ServiceLocator class (including executing the static block that initializes the singleton instance).
    Afterwards, for the lifetime of this VM, all subsequent threads that will invoke getInstance will undergo no penalty, as no synchronization is involved.
    Well-known threading issues involving singleton access (search "Double-checked locking") appear only with code that wants to perform "lazy loading" without synchronization.
    Performance issues (or supposed performance issues) incurred by synchronization only occur to code that uses synchronized blocks or methods, obviously.
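    As an illustration (class names here are invented for the sketch), these are the two common non-blocking initialization styles: the eager static initializer used above, and the initialization-on-demand holder idiom, which stays lazy without synchronization or double-checked locking:

    // Eager: the instance is created during class initialization; getInstance needs no lock.
    class EagerLocator {
        private static final EagerLocator ME = new EagerLocator();
        private EagerLocator() { }
        static EagerLocator getInstance() { return ME; }
    }

    // Lazy without locking: the JVM initializes Holder only on the first call to
    // getInstance, and class initialization is guaranteed to run exactly once,
    // so there is no need for double-checked locking.
    class LazyLocator {
        private LazyLocator() { }
        private static class Holder {
            static final LazyLocator ME = new LazyLocator();
        }
        static LazyLocator getInstance() { return Holder.ME; }
    }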
    2. If the answer to question #1 is positive, can we conclude that a Singleton will lower the performance of a web application that needs high concurrency?
    First, I wouldn't worry too much about it prematurely.
    Next, if this singleton happens to be a performance bottleneck, it wouldn't be in getInstance but merely in your access to the synchronized collection (which, by the way, does not prevent an EJBLocalHome from being looked up and inserted twice :o)
    But I sincerely doubt it could be anything measurable.
    So go ahead with a plain simple singleton this way. If you happen to hit a performance problem, profile it and come back with the results.
    3. Should we create a pool of ServiceLocators instead of making it a Singleton?
    You need to pool objects when their methods use up a lot of serial time.
    Here I really think the getLocalHome will be fast enough to not matter compared to everything else (DB access, file access, network latency, your business logic,...).
    If it did, there would be other means (a read-write lock) to lower the serial cost.
    Moreover, pooling your locator would divide your cache efficiency (hits / (hits + misses)) by the number of instances...
    >
    4. For EJBs looking up EJBs, why should we not use a Singleton ServiceLocator??

  • Regarding Transformation and Data Transfer Process(DTP)

    Dear Gurus
    1) Transformation replaces the transfer rule and update rule.
    2) DTP replaces the info package.
    Hence, there are some advantages to Transformation and DTP.
    I couldn't understand the explanation on help.sap.com.
    Could you please tell me, in simple language, what new features there are in Transformation and DTP that are not possible with Transfer rules, Update rules and InfoPackages?
    Thanks and Regards
    Raja

    Hi Raja,
    These are the advantages of DTP and Transformation over their predecessors.
    DTP:
    1) Improved transparency of staging processes across data warehouse layers (PSA, DWH layer, ODS layer, Architected Data Marts)
    2) Improved performance: intrinsic parallelism
    3) Separation of the delta mechanism for different data targets: delta capability is controlled by the DTP
    4) Enhanced filtering in the data flow
    5) Repair modes based on temporary buffers (buffers keep the complete set of data)
    Transformation: SAP NetWeaver 2004s significantly improves transformation capabilities. The improved graphical UI helps decrease TCO.
    1) Improved performance, flexibility and usability
    2) Graphical UI
    3) Unification and simplification of transfer and update rules
    4) New rule type: end routine
    5) New rule type: expert routine (pure coding of the transformation)
    6) Unit conversion capabilities for unit conversion during data load (and reporting)
    Assign points if it is helpful.
    Cheers,
    Bharath

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am a relative novice to Java; I have a procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding performance of using char[i] arrays versus Strings and StringBuilder classes using charAt() methods.
    I read a file one line/record at a time and then I process it. Regex is not an option (too slow, and it cannot handle all the cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder too) class using String.charAt(i) methods is several times (5 times+?) slower than referring to a char of an array with index.
    My question: is this correct observation re charAt() versus char[i] performance difference or am I doing something wrong in case of a String class?
    What is the best way (performance) to process character strings inside Java if I need to process them one character at a time ?
    Is there another approach that I should consider?
    Many thanks in advance
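    For reference, a minimal sketch (method names are illustrative) of the two loop styles being compared; as the reply below notes, the gap narrows considerably once length() is hoisted out of the loop condition:

    class CharLoopComparison {
        // Counts commas in a record two ways; the difference shrinks once length() is hoisted.
        static int countCommasCharAt(String line) {
            int n = line.length();              // hoisted out of the loop condition
            int count = 0;
            for (int i = 0; i < n; i++) {
                if (line.charAt(i) == ',') {
                    count++;
                }
            }
            return count;
        }

        static int countCommasArray(String line) {
            char[] chars = line.toCharArray();  // one extra copy per record
            int count = 0;
            for (int i = 0; i < chars.length; i++) {
                if (chars[i] == ',') {
                    count++;
                }
            }
            return count;
        }
    }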

    >
    Once I took that String.length() method out of the 'for loop' and used integer length local variable, as you have in your code, the performance is very close between array of char and String charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is already faster than the rate at which you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean' records of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part of step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
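    A rough sketch of that kind of byte-level scan (this is an illustration, not the poster's framework): find where any of a set of multi-byte delimiters first occurs in a buffer, so the bytes before it can be split off as a 'physical record':

    class DelimiterScan {
        // Returns the index in buf (within [offset, limit)) where one of the delimiters starts,
        // or -1 if none is found. Delimiters may be multi-byte, e.g. CR+LF or "XyZ".
        static int findDelimiter(byte[] buf, int offset, int limit, byte[][] delimiters) {
            for (int i = offset; i < limit; i++) {
                for (byte[] delim : delimiters) {
                    if (i + delim.length <= limit && matchesAt(buf, i, delim)) {
                        return i;
                    }
                }
            }
            return -1;
        }

        private static boolean matchesAt(byte[] buf, int pos, byte[] delim) {
            for (int j = 0; j < delim.length; j++) {
                if (buf[pos + j] != delim[j]) {
                    return false;
                }
            }
            return true;
        }
    }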
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns anywhere, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files take that into account. There have been some attempts (you can find them on the net) for using various 'escaping' techniques to escape those characters where they occur but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest version of Informatica and DataStage cannot export a simple one column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code and complex Regex is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures so that could be done in the database. I would only recommend pursuing that approach if you were already needing to perform some substantial data validation or processing the DB to begin with.
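    As a small hedged example of the validate-with-regex, parse-with-plain-string-handling split (the pattern and method names below are assumptions for illustration, not taken from the thread):

    import java.util.regex.Pattern;

    class CurrencyFieldCheck {
        // Accepts values like "$1,234.56", "-987", "(12.30)"; adjust to the real business rule.
        private static final Pattern CURRENCY =
                Pattern.compile("^\\(?-?\\$?\\d{1,3}(,\\d{3})*(\\.\\d+)?\\)?$");

        static boolean isValidCurrency(String field) {
            return CURRENCY.matcher(field.trim()).matches();
        }

        static String stripCurrencyFormatting(String field) {
            // Plain character processing for the extraction step, as suggested above;
            // parentheses are treated as a leading minus sign.
            return field.replace("$", "").replace(",", "")
                        .replace("(", "-").replace(")", "").trim();
        }
    }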
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details about String versus Array are way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues with the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in-between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later if, and ONLY if, that particular module becomes a bottleneck, replace that module with one that is more performant.
    Your initial code should ONLY use standard, well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
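    To make the 'plug n play' point concrete, a minimal sketch (the interface and class names are invented here) of coding the parser against an interface so the simplest implementation can later be swapped for a tuned one without touching the rest of the framework:

    // The framework depends only on this contract, never on a concrete parser.
    interface RecordParser {
        String[] parse(String record);
    }

    // Simplest possible implementation to start with; it does not handle quoted fields,
    // but a quote-aware parser would implement the same interface and can be swapped in
    // later, if and only if profiling shows parsing has become the bottleneck.
    class SimpleCommaParser implements RecordParser {
        public String[] parse(String record) {
            return record.split(",", -1);   // -1 keeps trailing empty fields
        }
    }

    // Usage inside the framework:
    //   RecordParser parser = new SimpleCommaParser();
    //   String[] fields = parser.parse(line);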

  • Risky enable star transformations and trusted Query Rewrites?

    Hi,
    I need some advice/opinions from someone experienced with large scale
    data warehousing.
    I'm working on a fairly large data warehouse (around 3 TB), and we're
    using Oracle 10.1.0.2.0.
    So, I found out about MV's and Star Transformations, and that we're not
    using them.
    Naturally I decided to try them out in our test environment and I was
    more than pleased (actually, I nearly wet my pants) with the potential
    performance boost we could get for some of our more critical solutions.
    However, I also noticed that the production environment has the
    following settings:
    star_transformation_enabled = false
    query_rewrite_integrity = enforced
    ...which basically disables all the cool stuff. In the testing
    environment I used the following:
    star_transformation_enabled = true
    query_rewrite_integrity = trusted (to make use of func. dep in
    dimensions)
    I would like to stand on somewhat solid ground and increase my understanding before approaching our DBAs with the suggestion to change system global settings :)
    Basically, my question(s) are:
    1. What is the impact of enabling Star Transformations on a system?
       Is there any at all, if no previous solution has been built in a way to make use of star transformations?
       Or could this change result in fine-tuned queries performing badly since they suddenly make use of star transformations?
    2. Is "query_rewrite_integrity" used by Oracle for other things besides
    Materialized Views?
    I'm thinking, if the only thing it's used for is to resolve query
    rewrites for MV's, then it's safe to change it, because there are no
    such MV's.
    Note that I'd like to set it to TRUSTED, in order to make real use
    of the dependencies declared with CREATE DIMENSION...
    I would be happy to know what you think about this.
    Any thoughts or opinions are welcome, since this is new ground for me.
    Best Regards
    R.

    The following parameters are deprecated in release 10.2.
    LOGMNR_MAX_PERSISTENT_SESSIONS
    MAX_COMMIT_PROPAGATION_DELAY
    REMOTE_ARCHIVE_ENABLE
    SERIAL_REUSE
    SQL_TRACE
    Check this in your parameter file.
    As per Oracle Errors Documents.
    Error : ORA-32004
    Cause:   One or more obsolete and/or deprecated parameters were specified in the SPFILE or the PFILE on the server side.
    Action:  See the alert log for a list of parameters that are obsolete or deprecated. Remove them from the SPFILE or the server-side PFILE.
    Regards,
    Sabdar Syed.

  • Using transforms and filters without device drivers

    Hello,
    I came across NIMS as a possible solution for some transforms and filtering, possibly even generating test signal data, for a seismic application. I'm in the process of evaluating NIMS for best possible fit for what we need/want to accomplish.
    Basically, we've got some seismic data, and we want to process that data through a series of transforms and filters to denoise and pick the data for seismic analysis. No sense reinventing the wheel if we can adopt and then adapt a third-party library like NIMS into our app.
    We do not necessarily need any device drivers, although I noticed installing NIMS requires them. Hopefully we can opt in or out depending on what's actually required. Can someone help clarify the nature of the driver dependency?
    Anyhow, like I said, I am evaluating it for best possible fit in our application, but in the meantime if someone can shed some light on the above concerns, questions, etc., that would be great.
    Thank you...
    Best regards.
    Solved!
    Go to Solution.

    Glad to hear it!
    -Mike
    Applications Engineer
    National Instruments

  • Is there a way that i can downgrade my iOS 7.1 on my iPhone 4 to iOS 6xx? battery life not good, and performance isn't better than iOS 6.. Please apple i am really disappointed with iOS 7 on my iPhone 4

    Is there a way that I can downgrade my iOS 7.1 on my iPhone 4 to iOS 6.x? Battery life is not good, and performance isn't better than iOS 6. Please, Apple, I am really disappointed with iOS 7 on my iPhone 4; it runs great on iPhones newer than the 4, such as the 5/5s, etc. The iPhone 4 is just good with iOS 6...

    No.

  • SPM Transformation and Routine are missing in Delivered

    Hi All,
    During the transport (from Development to Quality) we received the error message 'Transformation and Routine are not available in A version'. Hence the transport has failed.
    I tried searching for this object (DSO - 0ASA_DS00) in RSOR - Business Content, but all these objects are missing in the Delivered content itself. The object and system details are as follows:
    Object : DSO - 0ASA_DS00
                 TRFN - The TRFN 0E2R8KR45M0H68POUF6JWA5J7TNA4DS9 is not available in Delivered content
                 TRFN - The TRFN 0MM70WOQSHKX3SSQEDTOL7UAK90H15G8 is available in Delivered content, but the corresponding Routine is not available in delivered content
                 TRFN - The TRFN 0E2R8KR45M0H68POUF6JWA5J7TNA4DS9 is not available in Delivered content
    Component    Release        Level    Support Package
    SAP_ABA      701            0008     SAPKA70108
    SAP_BASIS    701            0008     SAPKB70108
    PI_BASIS     701            0008     SAPK-70108INPIBASIS
    ST-PI        2008_1_700     0003     SAPKITLRD3
    SAP_BW       701            0008     SAPKW70108
    ANAXSA       210            0005     SAPK-21005INANAXSA
    BI_CONT      704            0009     SAPK-70409INBICONT
    OPMFND       210            0005     SAPK-21005INOPMFND
    ST-A/PI      01M_BCO700     0001     SAPKITAB7F
    When I tried collecting object (DSO - 0ASA_DS00) with 'Only necessary object', I am not facing this error, but with 'In DataFlow before & After', we are facing this issue.
    We also found 3 objects (TRFN's & Routines) are missing for PO & Invoice Process chains in the delivered content itself.
    We need to transport this object at the earliest, can anyone please guide me, how to proceed on this?
    Thanks..
    Regards
    Santhosh Kumar N

    Hi Rajesh,
                Thanks for the guidance. I have checked the TRFN's & Routines in the Business content, but they are missing in the Delivered Version itself.
    When we tried transporting the Spend Process chains for PO & Invoices, we are facing this issue.
    Please let me know if we need to exclude these missing objects and start transporting, or whether there is any other option to get these missing objects in.
    Please find below the missing object details,
    1. The TRFN 0E2R8KR45M0H68POUF6JWA5J7TNA4DS9 is not available in Delivered content
    2. The TRFN 0MM70WOQSHKX3SSQEDTOL7UAK90H15G8 is available in Delivered content, but the corresponding Routine is not available in delivered content.
    3. The TRFN 0E2R8KR45M0H68POUF6JWA5J7TNA4DS9 is not available in Delivered content.
    4. The Object '8CXV3JV4FQR295IVBCHN69QBU' of type 'Routine' is not available in Delivered content.
    Thanks
    Regards
    Santhosh Kumar N

  • Forms and Reports: Automated Test tools - functionality AND performance

    All,
    I'm looking to get a few leads on an automated test tools that may be used to validate Oracle forms and reports (see my software configuration below). I'm looking for tools that can automate both functional tests and performance. By this I mean;
    Functional Testing:
    * Use of shortcut keys
    * Navigation between fields
    * Screen organisation (field locations)
    * Exercise forms validation (bad input values)
    * Provide values to forms and simulate user commit, and go and verify database state is as expected
    Performance Testing:
    * carry out tests for fixed user load
    * carry out tests for scaled step increase in user load
    * automated collection of log files and metrics during test
    So far I have:
    http://www.neotys.com/
    Thanks in advance for your response.
    Mathew Butler
    Configuration:
    Red Hat Enterprise Linux x86-64 architecture v4.5 64 bit
    Oracle Application Server 10.1.2.0.2 ( with patch 10.1.2.3 )
    Oracle Developer Suite (Oracle Forms and Reports) V10.1.2.0.2 ( with patch 10.1.2.3 )
    Oracle JInitiator 1.3.1.17 or later
    Microsoft Internet Explorer 6

    are there any tools for doing this activity like oracle recommended tools?
    Your question is unclear. As IK mentioned, the only tool you need is a new version of Oracle Forms/Reports. Open your v10 modules in a v11 Builder and select Save. You now have a v11 module. Doing a "Compile All PL/SQL" before saving is a good idea, but not required. The Builders and utilities provided with the version 11 installation are the only supported tools for upgrading your application. If you are trying to do the conversion of many Forms files in a scripted manner, you can use the Forms compiler in a script. Generating new "X" files will also update the source modules (fmb, mmb, pll). See MyOracleSupport Note 955143.1
    Also included in the installation is the Forms Migration Assistant. Although it is more useful to people coming from older versions, it can also be used to move from v10 to v11. It allows you to select more than one file at a time. Documentation for this utility can be found in the Forms Upgrade Guide.
    Using the Oracle Forms Migration Assistant

  • ASO MDX member formula and performance

    Hi,
    I am doing some testing of MDX formulas and performance. I found a performance issue, but I cannot understand why one report is taking so long.
    The situation is:
    I create a report or a MDX query with:
    6 dimensions in row and 1 dimension in column
    rows:
    Period - Filtered using a member
    Year - Filtered using a member
    Relationship Manager - Filtered using a member
    Report Type - Filtered using a member
    Local Relationship Manager - 4400 members level 0
    Global Relationship Manager - 10400 members level 0
    Column:
    Account dimension, only a member
    The member selected for Report Type (RM.Local) has a formula
    My Report Type dimension has 10 members; there is one member where I store data, called RM.Input.
    My first test was:
    RM.Local's formula is [RM.Input]; the report runs in 1 second.
    RM.Local's formula is ([RM.Input],[MTD]), where MTD is a stored level-0 member in my View dimension; the report runs in 20 minutes. I was not expecting such bad performance when I am only pointing at ([RM.Input],[MTD]).
    Do you consider this time is reasonable when I am using this formula?
    The mdx report is:
    With
    set [_Local Relationship Manager3] as 'Descendants([All Local Relationship Managers], 2)' = level 0 members
    set [_Global Relationship Manager4] as '[Global Relationship Manager].Generations(4).members' = level 0 members
    set [_Period0] as '{[Period].[Oct]}'
    set [_Relationship Manager4] as '{[Relationship Manager].[Dummy1)]}'
    set [_Report Type0] as '{[Report Type].[RM.Local]}'
    set [_Year2] as '{[Year].[FY-2013]}'
    select
    { [Account].[Expenses]
    } on columns,
    NON EMPTY {crossjoin({[_Local Relationship Manager3]},crossjoin({[_Global Relationship Manager4]},crossjoin({[_Period0]},crossjoin({[_Relationship Manager4]},crossjoin({[_Report Type0]},{[_Year2]})))))} properties MEMBER_NAME, GEN_NUMBER, [Global Relationship Manager].[MEMBER_UNIQUE_NAME], [Global Relationship Manager].[Memnor], [Local Relationship Manager].[MEMBER_UNIQUE_NAME], [Local Relationship Manager].[Memnor], [Relationship Manager].[MEMBER_UNIQUE_NAME], [Relationship Manager].[Memnor], [Period].[Default], [Report Type].[Default], [Year].[MEMBER_UNIQUE_NAME], [Year].[Memnor] on rows
    from [DICISRM.DICISRM]

    Ok Try this one
    But here you have to change the MDX formula every month.
    Year
    --FY2009
    --FY2010
    --FY2011
    --FY2012
    Period
    --TotalYear
    ----Qtr1
    -------Jan
    -------Feb
    -------Mar
    Let's say your CurrentYear is FY2011 and your Current Month is March; then your MDX will be:
    case when contains([Year].CurrentMember,MemberRange([FY2009],[FY2010])) and contains([Period].CurrentMember,MemberRange([Jan],[Feb]))
    Then
    B
    else
    C
    end
    For the next month you just have to make a change in the MemberRange, i.e., replace Feb with Mar:
    case when contains([Year].CurrentMember,MemberRange([FY2009],[FY2010])) and contains([Period].CurrentMember,MemberRange([Jan],[Mar]))
    Then
    B
    else
    C
    end
    I tested it and it's working fine.
    I think this will solve your problem, but there might be a more elegant solution out there.
    Regards,
    RSG

  • Invoice Number and Proforma Invoice Number

    Hi all
    I want to get the combination of invoice number and proforma invoice number. I have found the VBFA table, but it's very slow. Is there an alternative program or function? Please help me.
    Thanks
    Kanishka

    Try AC_DOCUMENT_RECORD. It returns all kind of accounting documents.

  • How can I optimize my hard disk drive usage and performance in Windows 8 or Windows 7?

    Question: How can I optimize my hard disk drive usage and performance in Windows 8 or Windows 7?
    Answer: There are a few simple steps you can take to ensure your hard disk drive is used optimally.
    Use Toshiba HDD Protection
    Many Toshiba laptops come with a program called Toshiba HDD Protection pre-installed. This program helps to protect your hard disk drive from being damaged due to falls or impacts. By default, it should already be enabled. You might be tempted to lower the detection levels in this application, but doing so could cause your hard disk drive to be damaged. Remember that while the application can reduce the chance of damage, you should still avoid allowing the laptop to fall or suffer rapid impacts.
    For more information on this utility, see the following article:
    TOSHIBA HDD Protection
    Optimize the drive
    Windows 8 and Windows 7 optimize hard disk drives automatically through a process called defragmentation. Unless you've disabled this, you don't need to do anything. If you have disabled this and want to run the process, you can still do so.
    In Windows 8, search for "Defrag" at the Windows Start screen and select "Defragment and optimize your drives."
    In Windows 7, search for "Defrag" in the Start Menu's search field and select "Disk defragmenter."
    You can use this tool to optimize your hard disk drives, allowing Windows to find needed files faster.
    Remove items from startup
    Some applications run automatically when Windows starts. This can add additional functionality, but it also decreases the performance of your computer. Sometimes you might want to disable certain programs from starting automatically.
    In Windows 8, search for "Task Manager" at the Start screen. Select the "Startup" tab. Select an application you'd like to disable from starting automatically and then click the "Disable" button in the lower-right.
    In Windows 7, type "msconfig" in the Start Menu's search field and press ENTER. Uncheck the boxes next to applications you'd like to disable from starting automatically.
    You should be sure of the purpose of an application before disabling it from starting automatically. Some applications might be important. If in doubt, you might consider searching on the Web to discover more information about a program. Remember that if you find that you disabled something vital, you can always re-enable it.
    For more information, please see the following video:


  • I have multiple apple ids was backing up everything to iCloud from iPhone 5. I just bought iPhone 6 and perform a iCloud restore. I received a message that all files did not restore.   How can I restore apps, files manually?

    I have multiple Apple IDs and was backing up everything to iCloud from my iPhone 5. I just bought an iPhone 6 and performed an iCloud restore. I received a message that all files did not restore. How can I restore apps and files manually?

    Yes - I connected my phone to my computer / iTunes and went into the apps section, but from there I have no idea how to manage the push notifications. I even tried going into the iTunes that is installed on my phone. I still cannot find any place to manage these popups. I have also gone into Settings - Notifications - and tried turning all notifications for these apps off, but that didn't work either. Any guidance is MUCH appreciated - I'm not sure where to go from here.

  • Errors While Transporting Transformations and DTPs

    Hi Experts,
    I'm trying to transport transformations and DTPs from DEV to QA and am getting the following error messages. Does anyone know what's happening and how I can fix this?
    Thanks,
    Janice
    The following are excerpts from the log of the transport organizer, with further SAP-supplied information on the error messages.
    Transformations:
    Start of the after-import method RS_TRFN_AFTER_IMPORT for object type(s) TRFN (Activation Mode) 
    Activation of Objects with Type Transformation                                                  
    Checking Objects with Type Transformation                                                       
    Checking Transformation 06C3WE26JQY0VSZPNMOZFKD0W416PRU4                                        
    No rule exists                                                                               
    Target RSDS 0ACCOUNT_TEXT QA1CLNT400 is not allowed                                             
    Target RSDS 0ACCOUNT_TEXT QA1CLNT400 is not allowed
    Message no. RSTRAN802
    No rule exists
    Message no. RSTRAN514
    DTPs:
    Start of the after-import method RS_DTPA_AFTER_IMPORT for object type(s) DTPA (Activation Mode)
    Conversion of T version from DTP DTP_4FG5GXT9OLNN3YM3QFMW9V3W5 to M version...                 
    Conversion of T version from DTP DTP_4FK894515QD0O9UM4JSJPIB91 to M version...                 
    Activation of Objects with Type Data Transfer Process                                          
    Saving Objects with Type Data Transfer Process                                                 
    Saving Data Transfer Process DTP_4FKN7KAKCOLQ3ROIB7U7DV13A                                     
    Transformation 06C3WE26JQY0VSZPNMOZFKD0W416PRU4 is inactive (cannot be executed)               
    Error saving Data Transfer Process DTP_4FKN7KAKCOLQ3ROIB7U7DV13A                               
    Transformation 06C3WE26JQY0VSZPNMOZFKD0W416PRU4 is inactive (cannot be executed)
    Message no. RSTRAN715
    Diagnosis
    The transformation 06C3WE26JQY0VSZPNMOZFKD0W416PRU4 is inactive (not executable).
    This can be caused, for example, by a change made to the source object or target object of the transformation.
    The transformation connects the source to the target.
    Procedure
    The transformation must be reactivated.
    Error saving Data Transfer Process DTP_4FKN7KAKCOLQ3ROIB7U7DV13A
    Message no. RSO843

    Thanks so much everyone for your suggestions. I've tried them all and here's what I now have:
    Anil - Active datasources were included in the original transport. I tried retransporting and got the same error messages.
    Voodi - 0ACCOUNT_TEXT was resident in the Q system as a 3.x datasource. I replicated it as a 7.0 datasource in D and this is what was transported to Q. It is a 7.0 datasource in Q. I got the same error messages when I tried transporting just the transformations.
    Jayaram - 0ACCOUNT is active in the Q system and is currently being used in transformations other than the one I'm trying to import.
    Godhuli - I not only re-replicated the datasource in D, I reactivated the transformations and DTPs and resaved the InfoPackages. I tried retransporting with a new transport request and it failed with the same error messages.
    Is there anything else anyone can think of?
    Thanks,
    Janice

  • Error While Transporting Transformation and DTP

    I am trying to transport a Transformation and DTP, but it's giving the following error:
    Start of the after-import method RS_TRFN_AFTER_IMPORT for object type(s) TRFN (Activation Mode)
    InfoObject xxxx has a characteristic routine, but no rule                                  
    InfoObject xxxx has a characteristic routine, but no rule                                   
    InfoObject xxx has a characteristic routine, but no rule                                      
    InfoObject xxx has a characteristic routine, but no rule                                   
    No rule exists                                                                               
    Transformation  : Rule 1 could not be imported                                                    
    Transformation  : Rule 2 could not be imported                                                    
    Transformation  : Rule 8 could not be imported                                                    
    Transformation  : Rule 9 could not be imported                                                    
    Transformation  : Rule 60 could not be imported                                                   
    Transformation  : Rule 109 could not be imported                                                  
    Transformation  : Rule 128 could not be imported                                                                               
    Start of the after-import method RS_DTPA_AFTER_IMPORT for object type(s) DTPA (Activation Mode)   
    Conversion of T version from DTP xxxx to M version...                    
    An exception has occurred                                                                               
    Any help? Thanks.

    Hello,
    Do you have any routines in the transformation you are trying to transport? It looks like it's missing some routines that need to be transported.
    Hope that helps.
