Publication filter with a lot of data

Hello,
I have a webi report that shows the following information for servers:
* File systems on the server and their usage during the last month
* CPU usage during the last month
* RAM usage during the last month
I need to produce a report per server. Currently I have roughly 1,500 servers.
I use a publication to filter the report and to give each output file the correct name.
The publication runs correctly, but I have the following questions.
I have five queries in the Webi document. I know it is possible to use the result of one query to filter another query. But when I personalize a query in the publication, the other queries in the document that consume the personalized query do not pick up the personalized value, so I have to personalize each query individually. Is there a better way?
I believe the publication runs the entire report first and filters it afterwards. Is that correct? If so, I have a problem, because the data volume is huge (I need to scale to roughly 4,000 servers). Is it possible to personalize the report via a prompt? I don't see how. I see that the publication lets me set a prompt, but it seems impossible to personalize the prompt via the dynamic recipients.
Any help?
Thanks in advance
Dimitri

Similar Messages

  • Always used one main account; started using individual user accounts. How do I use software or applications with a lot of data, like Quicken, under my own user account?

    I recently upgraded our family's Mac to OS X and thought this was the perfect time to create and use "user accounts"; we had always used one main account. So how do I use software or applications with a lot of data, like Quicken, under my own user account? I want to be able to manage my own iTunes library, iPhone apps, and messages, but I still really need the stuff I have in Quicken Essentials, and I don't want to have to restart all the work I've already done in Quicken.

    I haven't used Quicken in a while, but most applications store your files in your Documents folder. Is that where your Quicken data file is? What you do next depends on how many family members need to get at that data.
    If multiple family members need to use the Quicken data file, try moving it to the Documents folder in the Shared account. That is an account that all accounts can see. It's at the same level as the other accounts. In other words, Shared is one level up from your Home account, or Hard Drive/Users/Shared.
    If you're the only one allowed to see that Quicken data, move the Quicken data file from the old main account to your account, and don't leave a copy behind. You can use the Shared folder as a way station for the transfer since you won't be able to see both accounts' Documents folders at the same time (because you're not allowed to peek into other people's accounts). Or you can use another disk or server for the transfer, as long as you can get to it when logged into either account.

  • Best way to fill a datagrid with A LOT of data?

    What's the best way to fill a datagrid with A LOT of data? I'm talking about something like 10,000 rows. Can the datagrid handle it? No screen is large enough to show 10,000 records... is there a way to load 50, and then load the next 50 as the user scrolls down, etc.?

    Right. It's not recommended that you load 10,000 rows into the datagrid at once. Using the Data Management Service with paging enabled is a much better solution. See the Configuring the Data Management Service chapter in the Flex 2 Developer's Guide for more info.
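    In data-management-config.xml, paging is switched on per destination; roughly like this (a sketch from memory, so treat the destination id, assembler class, and element names as assumptions and verify them against that chapter):

        <destination id="inventoryDestination">
            <adapter ref="java-dao"/>
            <properties>
                <source>com.example.InventoryAssembler</source>
                <network>
                    <!-- hand the client 50 records at a time instead of all 10,000 -->
                    <paging enabled="true" pageSize="50"/>
                </network>
            </properties>
        </destination>

    The grid then requests the next page on demand instead of loading all the rows up front.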

  • I recently got the iPhone 4s, but my iPod has a lot of apps with a lot of data on them and I don't want to start over. I already synced my iPhone from the computer that had all my apps from my iPod; how do I transfer the data over? The other apps did.

    I recently got the iPhone 4s, but my iPod has a lot of apps with a lot of data on them and I don't want to start over. I already synced my iPhone from the computer that had all my apps from my iPod. How do I transfer the data over? The other apps transferred, but for some reason the app Clash of Clans did not. Thank you.

    If I wiped my phone I wouldn't have the contacts on my phone to send to myself. I would need to take just the contacts from my backup. I would have to do this through iTunes, and I don't see how I can extract only the contacts from my backup. From what I can figure out, it is all or nothing.

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really, I'm not sure what I'm going to do with my issue, but I have some options.  I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report: the users want to see all agreements and all conditions related to the updating of rebates, plus the affected invoices. From a technical perspective, ENT6038-KONV-KONP-KONA-KNA1 are the tables I have to hit. The problem is that when they retroactively update rebate conditions they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
    I've tried everything around the code. If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report. This would work, but I won't know what data needs to be loaded until report run time: they put in a date, and I simply can't preload everything. I don't like this option much.
    2) Write a function module to do this work. When the user clicks the button to get this particular data, it launches the FM in the background and e-mails them the results. As you know, a background job won't time out. So far this is my favored option.
    Any other ideas?
    Oh... nope, BI is not an option; we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents - firstly, I totally agree with Derick that it's probably a good idea to go back to the business and have them justify the reporting requirement, and ask "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements much; in my experience they neither understand the technology (much) nor want to hear about a system's technical limitations. They want what they want, if possible yesterday!
    So, about dealing with performance issues within ABAP: I'm sure you are already using efficient programming techniques like hashed internal tables with unique keys, accessing table rows through field symbols and so on, but what I was going to suggest is to look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to deal with this a couple of times in the past with massive amounts of data, and I found extracts to be very efficient performance-wise. A good point to remember when using extracts, quoting SAP Help: "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
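    A minimal sketch of the extract pattern (only a sketch; the data objects and the selection below are placeholders for your real KONV/KONP/KONA access):

        " Sketch only: field names, types and the selection are placeholders
        DATA: gv_knuma TYPE kona-knuma,   " agreement number
              gv_vbeln TYPE vbrk-vbeln,   " invoice number
              gv_kschl TYPE konv-kschl,   " condition type
              gv_kbetr TYPE konv-kbetr.   " condition rate

        FIELD-GROUPS: header, details.
        " HEADER holds the sort key and is implicitly part of every extract record
        INSERT gv_knuma gv_vbeln INTO header.
        INSERT gv_kschl gv_kbetr INTO details.

        " Fill the extract record by record; big extracts spill to OS files automatically
        SELECT knuma vbeln kschl kbetr
               FROM my_rebate_view        " placeholder for your join/selection
               INTO (gv_knuma, gv_vbeln, gv_kschl, gv_kbetr).
          EXTRACT details.
        ENDSELECT.

        SORT.   " sorts the whole extract by the HEADER fields
        LOOP.   " sequential processing, no giant internal table needed
          AT END OF gv_vbeln.
            " aggregate / output one invoice's conditions here
          ENDAT.
        ENDLOOP.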
    Hope this helps,
    Cheers,
    Sougata.

  • Frequent index synchronization for table with a lot of data

    Hello! I have a table with a text column to be indexed. There are a lot of records in the table, and the texts in the text column are often quite large. The number of records grows constantly, and I need up-to-date search results at all times, so I can't synchronize the index once or twice a day: I need to do it after every insert, and the base table must never be locked while I synchronize the index.
    So I read the documentation and found an approach like this:
    after every insert on the base table, run:
    ctx_ddl.sync_index('my_index')
    And it seems I also need to run ctx_ddl.optimize_index() after synchronization.
    So will this work well, or should I go another way?
    To summarize, my main goals:
    1) No locks on the base table (or locks held only for a VERY short time);
    2) Always up-to-date search results.

    You can use sync(on commit) in your index parameters, which will make recent DML searchable immediately. You will still need to optimize and rebuild periodically to eliminate index fragmentation. The more frequently that you synchronize, the more fragmented your index will become. Index fragmentation slows down your queries that use the index. If you have some downtime or slow times such as weekends or nights, then that is a good time to do such things. If not, then you may wish to do so "online" to avoid interfering with searches or DML. Depending on the size of your table and the amount of DML, you may choose to optimize and/or rebuild hourly or daily or weekly or monthly or whatever.
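    For example (a sketch; substitute your own table, column, and index names):

        -- New rows become searchable as soon as their transaction commits
        CREATE INDEX my_index ON my_table (my_text_column)
          INDEXTYPE IS CTXSYS.CONTEXT
          PARAMETERS ('SYNC (ON COMMIT)');

        -- Run periodically (e.g. from a nightly job) to defragment the index
        BEGIN
          CTX_DDL.OPTIMIZE_INDEX('my_index', 'FULL');
        END;
        /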

  • Cross-tab with a lot of data causes fault in crpe32.dll when exporting to Excel

    Hello. I believe I found a bug in Service Pack 9. After upgrading my production server to the SP 9 runtime, I found that cross-tab reports with large datasets crashed the website. Per the Windows application log, a fault occurred in crpe32.dll. So I uninstalled the SP 9 runtimes and re-installed SP 8, and the reports worked again. My production server uses the 64-bit runtimes.
    I created a test site which includes two datasets, one large and one small, and two reports, one cross-tab and one non-cross-tab. The site has four pages: one opens the cross-tab report with the large dataset, another the same cross-tab report with the small dataset, another the non-cross-tab report with the large dataset, and the last the non-cross-tab report with the small dataset. Only the cross-tab report using the large dataset causes the fault. I've attached the test site for your use as needed.

    Hi Thomas,
    What version of Visual Studio are you using?
    What OS are you using?
    What version of IIS?
    I did a quick test using a simplified one-liner web app and I see the same thing...
    FYI, one change we've made: IIS recommends (actually, it's a must) loading the report and viewer in the Page_Init section and not Page_Load, so you initialize all of the required dependencies; Page_Load doesn't handle it properly...
    I tried both; it doesn't make any difference...
    By the way, your "large dataset" is not that big. I've seen people use 10 MB XML files.
    Try moving all of your code to the Page_Init section and test again to confirm...
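    For example, roughly (just a sketch; the class name and .rpt path are hypothetical):

        // Load the report once, early, in Page_Init rather than Page_Load
        using System;
        using CrystalDecisions.CrystalReports.Engine;

        public partial class CrossTabPage : System.Web.UI.Page
        {
            private ReportDocument report;

            protected void Page_Init(object sender, EventArgs e)
            {
                report = new ReportDocument();
                report.Load(Server.MapPath("~/Reports/CrossTab.rpt")); // hypothetical report path
                CrystalReportViewer1.ReportSource = report;
            }
        }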
    I don't believe it has anything to do with the amount of data directly; I believe it's the number of pages to render for a cross-tab report.
    I'll escalate this to DEV - ADAPT01726274
    Set for SP 10 (which means I set it to be fixed in SP 10, but that does not mean it will be; it all depends on DEV's workload)
    Thanks
    Don

  • Crystal generates empty pages for a datasource with a lot of data

    Hi, I hope someone has some hints to solve my problem with generating large reports.
    I have the SAP Crystal Reports runtime engine for .NET Framework 4 (32-bit) installed on my Microsoft Windows 2008 R2 Enterprise server.
    Since I updated Crystal Reports to 13.0.2.469, large reports contain only empty pages. If I print such a report the data are visible and the report is rendered correctly.
    I solved the problem temporarily by setting SeparatePages to true on the CrystalReportViewer control, but in the future I need the complete report displayed at once.
    I would be very pleased about your help / hints.
    Regards, Sabine

    Thank you for your answer.
    My report has only one cross table in it. I can generate it for a dataset with 26 rows; if there are 27 rows, the report is empty.
    If I use a different report without a crosstab, only with repeating sections and less data, I can select 150 rows. That gives me 5-7 pages.
    The page settings also make a difference: if I configure a smaller top and bottom border I have to change my select to fetch fewer rows, and with a larger border more rows fit.
    Here is an example of how I can reproduce my problem:
    using System;
    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using CrystalDecisions.CrystalReports.Engine;
    using System.Data;
    using System.Net;

    namespace EASWebOffice.Pages
    {
        public partial class ReportTest : System.Web.UI.Page
        {
            private HttpWebRequest request;

            protected void Page_Load(object sender, EventArgs e)
            {
                if (!Page.IsPostBack)
                {
                    request = (HttpWebRequest)WebRequest.Create(@"http://localhost:54332/Pages/ReportTest.aspx");
                    AddOnPreRenderCompleteAsync((BeginEventHandler)BeginAsyncOperation, (EndEventHandler)EndAsyncOperation);
                    if (Session["Report"] != null)
                    {
                        this.CrystalReportViewer1.ReportSource = Session["Report"];
                    }
                }
            }

            IAsyncResult BeginAsyncOperation(object sender, EventArgs e, AsyncCallback cb, object state)
            {
                if (Session["InitialiseReport"] == null)
                {
                    ProcessingReportData();
                }
                return request.BeginGetResponse(cb, state);
            }

            void EndAsyncOperation(IAsyncResult ar)
            {
                if (Session["InitialiseReport"] == null)
                {
                    Response.AddHeader("Refresh", "1");
                    Session["InitialiseReport"] = "performed";
                }
            }

            private void ProcessingReportData()
            {
                CrystalReport2 report = new CrystalReport2();
                // 26 rows works, 27 shows an empty report
                string sql = "select top 26 Auftraggeber as Customer, Kostenträgername as Project, Beschreibung as Description, Stundenanzahl as Hours, UMusername as Ressource from V_MonthEvaluation";
                DataTable data = WebSQLTools.ExecuteToDataTable(sql, Global.Connection, Global.Logger, "");
                DataSet dataSet = new DataSet("DataSet1");
                data.TableName = "DataTable1";
                dataSet.Tables.Add(data);
                report.SetDataSource(dataSet);
                Session["Report"] = report;
            }
        }
    }
    Does anyone have a similar problem? Is there a solution?
    Edited by: wohlsobena on Jan 16, 2012 3:34 PM

  • How can I get a date picker or wheel to show up in a form, to make filling in a form with a lot of dates easier?

    How can I get a date picker or wheel to show up in the form, to make filling in a form with a lot of dates easier?

    There is no built-in date picker available for forms created in Acrobat. There are some third-party solutions involving JavaScript (either a large collection of fields or a custom dialog) though. Also, text fields with date formatting will use a wheel type date picker on Android/iOS devices with Adobe Reader.

  • Enhancement request : Table - Data  -- Filter with multiple lines

    It would be nice if the Filter option on a table's Data tab could contain multiple lines, because:
    In the case of a complex where clause on a table (used in a query), it's hard to get an impression of the filter when everything is typed on one single line.
    With multiple lines it would also be easier to temporarily disable a single clause with a double dash (--) at the start of a line, instead of searching a complex filter for where to put /* ... */.
    Eric.

    What I mean is the Filter option when displaying the data (Data tab) of a table. You have the possibility to enter a Sort and a Filter (to reduce the data that is displayed). That Filter option has no way to enter multiple lines, so there is no way to 'format' the filter, which would be nice in the case of complex or large where clauses.
    What would you prefer? This (multiple lines):
    ( p_peildatum IS NULL
      OR TRUNC(p_peildatum) BETWEEN TRUNC(dnm.datum_ingang)
                                AND TRUNC(NVL(dnm.datum_einde, p_peildatum)) )
    AND ( p_ondergrens IS NULL
          OR TRUNC(p_ondergrens) <= TRUNC(NVL(dnm.datum_einde, p_ondergrens)) )
    or this, like it is now (everything on one single line):
    ( p_peildatum IS NULL OR TRUNC(p_peildatum) BETWEEN TRUNC(dnm.datum_ingang) AND TRUNC(NVL(dnm.datum_einde,p_peildatum)) ) AND ( p_ondergrens IS NULL OR TRUNC(p_ondergrens) <= TRUNC(NVL(dnm.datum_einde,p_ondergrens)) )
    Eric.

  • Using Non-destructive filter with Nested XML data

    Hi,
    How do you use Non-destructive filter with Nested XML data?
    I am using the non-destructive filter sample with my own XML, which is set up to search for the <smc></smc> values in my XML below. But when I test it, it only searches the last row of the "smc". How can I make it work so it searches the repeating nodes? Or does it have something to do with how my XML is set up?
        <ja>
            <url>www.sample.com</url>
            <jrole>Jobrole goes here</jrole>
            <prole>Process role goes here...</prole>
            <role>description...</role>
            <prole>Process role goes here...</prole>
            <role>description....</role>
            <prole>Process role goes here...</prole>
            <role>description...</role>
            <sjc>6K8C</sjc>
            <sjc>6B1B</sjc>
            <sjc>6B1F</sjc>
            <sjc>6B1D</sjc>
            <smc>6C9</smc>
            <smc>675</smc>
            <smc>62R</smc>
            <smc>62P</smc>
            <smc>602</smc>
            <smc>622</smc>
            <smc>642</smc>
            <smc>65F</smc>
            <smc>65J</smc>
            <smc>65L</smc>
            <smc>623</smc>
            <smc>625</smc>
            <smc>624</smc>
            <smc>622</smc>
            <audience>Target audience goes here....</audience>
        </ja>
    Here is the JavaScript that runs it.
    function FilterData()
    {
        var tf = document.getElementById("filterTF");
        if (!tf.value)
        {
            // If the text field is empty, remove any filter
            // that is set on the data set.
            ds1.filter(null);
            return;
        }
        // Set a filter on the data set that matches any row
        // that begins with the string in the text field.
        var regExpStr = tf.value;
        if (!document.getElementById("containsCB").checked)
            regExpStr = "^" + regExpStr;
        var regExp = new RegExp(regExpStr, "i");
        var filterFunc = function(ds, row, rowNumber)
        {
            var str = row["smc"];
            if (str && str.search(regExp) != -1)
                return row;
            return null;
        };
        ds1.filter(filterFunc);
    }
    function StartFilterTimer()
    {
        if (StartFilterTimer.timerID)
            clearTimeout(StartFilterTimer.timerID);
        StartFilterTimer.timerID = setTimeout(function() { StartFilterTimer.timerID = null; FilterData(); }, 100);
    }
    I really need help on this. Are there any other suggestions or samples that might work?
    thank you!

    I apologize; I'm using the Spry XML Data Set. I guess the best way to describe what I'm trying to do is: I want to use the non-destructive filter sample with the Spry Nested XML Data sample. So with my sample XML from my first post, and the same code the non-destructive filter uses, I'm having trouble searching repeating nodes; for some reason it only searches the last of the repeating nodes. Does that make sense? Let me know.
    thank you Arnout!
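    One thing that may be worth trying (a sketch, and an assumption on my part: a Spry XML Data Set whose XPath selects a leaf node exposes that node's text as a same-named column): point the data set at the repeating node itself, so every <smc> becomes its own row and the filter function sees all of them:

        // Each <smc> element becomes one row with an "smc" column (assumption noted above)
        var ds1 = new Spry.Data.XMLDataSet("data.xml", "ja/smc");

    The FilterData() function from the earlier post can then stay as it is, since row["smc"] now exists on every row.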

  • Looking for a form in the VIS demo database with lots of data

    Hi all,
    I am looking for a form in the VIS demo database (that ships with Oracle Applications) that contains a lot (1 MB or more) of data. Specifically, when I perform some type of search in the form, I would like more than 1 MB of data to be transferred over the network to the client.
    With Oracle Applications 11i, I was using this form: CRM Resource Manager, Vision Enterprises → Resources
    This was nice because searching for all resources resulted in 4.6 MB of data being sent by the server. However, in Oracle Applications R12, the same query in the same form only sends about 100 KB of data. :-(
    Does anyone know any other forms that have lots of data in them?
    - Kyle

    Thanks for that - I found the form. However, it appeared the database table was empty: when I performed a search for all jobs, no records were returned. Never mind; after some trial and error I found a site that had lots of jobs defined. Thanks for the pointer!
    I'm still running into the same problem I was seeing with the previous form I was using. This is what I'm doing:
    1. Open the form.
    2. Search for all records.
    3. Jump to the very last record.
    In Oracle Applications 11i, this would cause a large amount of data to be transferred over the network (4.6 MB for the previous form I was using and 2.6 MB for this new "Discrete Jobs" form I'm trying). However, the behavior is different in Oracle Applications R12. When I perform the same steps in the same forms, only a fraction of the data is transferred over the network (~100 KB).
    Ideally, I would like a set of steps (and a form) that will produce the same results in both 11i and R12. More realistically, I just need a form (or a new test procedure) in R12 that will spit out more than 1 MB of data.

  • I have been a loyal customer for years and have a hotspot because internet options are very limited in our rural area. I have stayed with 10G since the beginning. Most months I don't use the 10G. Verizon has no problem with me paying for data I don't use.

    I have been a loyal customer for years and have a hotspot because internet options are very limited in our rural area. I have stayed with 10G since the beginning. Most months I don't use the 10G; Verizon has no problem with me paying for data I don't use. This month my daughter came home from college and accidentally used 24G... 14 over, and we get blindsided with $140.00 in overages. I called Verizon today... stopped in to the store today... My neighbor told me they waived charges for her one time. Lots of charges... But I'm getting no help. Not even an offer of a payment plan to help me out. Going back just 6 months, I have overpaid for 19G... Seems like they would like to help out their loyal customers!!!! Does anybody have any suggestions on how to deal with them? We are not wealthy... or I would just pay this and walk away...

    There is a big misconception in what customers believe a cell carrier is obligated to do.
    You pay a set price to use up to that amount of data. It makes no difference if you use it to the paid limit or stay way under; it's like peace of mind, so you don't have to worry about a data counter.
    Your daughter used the data, and your plan is quite clear about what the overage charges are. Why should or would Verizon Wireless just forgive the charges because you are a customer? Your daughter used the data; get the money from her. That is the responsible thing to do.
    There is no "I have been a loyal customer, so please remove the $120, or $250, or $2,000.00, since I did not mean to use it."
    Your electric company, or gas company, or any other company does not remove valid charges. Why should Verizon Wireless?
    Just pay the invoice, and don't think you are being mistreated; Verizon is a business, not a charity.
    Good Luck

  • Filter Idoc segment based on date in XSLT map

    Hi,
    In an IDoc to flat file XSLT mapping, I have a requirement to filter one segment out of multiple segment occurrences in an HRMD_A IDoc. The IDoc has two date fields in the segment (actually string data, containing dates in YYYYMMDD format): an end date and a start date. I need to pick only the one segment, out of many, where the current date falls within the start and end dates (end date >= current date >= start date), and then map the output fields from that filtered segment.
    It's easy to do in graphical mapping, but graphical mapping is hard to use here because the message structures are enormous. I don't have much hands-on experience with XSLT, but I think many of you have been through this kind of requirement using XSLT, so I'm holding my hopes high :). Please suggest a logic for this; it will be highly appreciated.
    Regards
    Suman.

    These functions will give you the current date and time:
    fn:current-dateTime()     => Returns the current dateTime (with timezone)
    fn:current-date()                 => Returns the current date (with timezone)
    fn:current-time()                 => Returns the current time (with timezone)
    To compare:
    fn:compare(comp1,comp2)
    fn:compare(comp1,comp2,collation)
    => Returns -1 if comp1 is less than comp2, 0 if comp1 is equal to comp2, or 1 if comp1 is greater than comp2 (according to the rules of the collation that is used)
    Example: compare('ghi', 'ghi')
    Result: 0
    Also I suggest working with an IF condition like this:
    <xsl:if test="price &gt; 10">
      <tr>
        <td><xsl:value-of select="title"/></td>
        <td><xsl:value-of select="artist"/></td>
      </tr>
    </xsl:if>
    Edited by: Kai Lerch-Baier on Apr 14, 2009 10:21 AM
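    Putting it together, a minimal sketch (XSLT 2.0; the segment and field names E1P9999, BEGDA and ENDDA are placeholders for your HRMD_A structure; because the dates are YYYYMMDD strings, a numeric comparison is enough):

    <!-- current date rendered as YYYYMMDD, to match the IDoc date strings -->
    <xsl:variable name="today"
                  select="format-date(current-date(), '[Y0001][M01][D01]')"/>
    <!-- keep only the segment whose validity interval contains today -->
    <xsl:for-each select="//E1P9999[number(BEGDA) &lt;= number($today)
                                    and number(ENDDA) &gt;= number($today)]">
      <!-- map the output fields from the matching segment here -->
    </xsl:for-each>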

  • Hi, I did an upgrade of my OS X Mountain Lion and it deleted all my documents, bookmarks and mailboxes. Is that normal? I have lost a lot of data because of it, because I did not back up all my files, just my photos... what can I do?

    Hi,
    A while ago I received from Apple the notice of an upgrade for my OS X Mountain Lion, and I finally decided to do it... and since it was just an upgrade I DIDN'T DO A BACKUP of all the content of my MacBook Pro, because I thought it wasn't necessary... but when I did it, it deleted all my files in Documents, most of my mailboxes in Mail, and most of my bookmarks in Safari.
    I do not understand why, since I did not change my OS but just upgraded the existing one... the result is that I lost a lot of data and I do not know what to do.
    Is it normal that it deleted data in my Documents folder and in other parts of my Mac?? And if it is so, why don't you post a warning saying that before going through with a simple upgrade of your computer's OS everybody should back up everything on it?!
    Good morning,
    For some time I had been getting the notice from Apple that there was an upgrade for my operating system, OS X Mountain Lion, and in the end I decided to do it. Since it was only an upgrade of the operating system, and not a change to a newer OS, I did not back up the contents of my Mac. But after doing the upgrade I realized that the process had deleted all the files in my Documents, the mailboxes I had created in Mail, and the bookmarks I had created in Safari... and perhaps other things I have not noticed yet.
    Is it normal for this to happen? And if so, why don't you put, alongside the upgrade notice, a warning that before proceeding it is advisable to back up all the data on the computer?
    I now find myself in the awkward position of having lost important data, along with time and work.

    What happened isn't normal, but does demonstrate why you need a backup, preferably two.
    Data Recovery – Best
    Data Recovery – Disk Drill
    Data Recovery – Data Rescue
    Data Recovery – File Salvage
    Data Recovery – Stellar Phoenix
    Data Recovery - uFlysoft
    Data Recovery - Recovering Deleted Files
    Data Recovery - Recovering Deleted Files (2)
