Example of accessing relational data in Tangosol Cache

Hi,
I was hoping for an example of how I can access a Tangosol cache with an SQL-like query. It would be great to see how to do something like:
select field3 from table where field1='value' and field2='value'
pat

Hello,
Please see this link:
http://wiki.tangosol.com/display/COH33UG/Querying+the+Cache
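In short, Coherence expresses that kind of predicate with Filter objects rather than SQL text. As a minimal sketch (the cache name "table", the Row value class, and its getters are hypothetical stand-ins for your own cached type), your example query might look like this:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.filter.AndFilter;
import com.tangosol.util.filter.EqualsFilter;
import java.io.Serializable;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

public class CacheQueryExample {

    // Hypothetical value type holding the three fields from the example query
    public static class Row implements Serializable {
        private String field1, field2, field3;
        public String getField1() { return field1; }
        public String getField2() { return field2; }
        public String getField3() { return field3; }
    }

    public static void main(String[] args) {
        // The named cache plays the role of the "table"
        NamedCache cache = CacheFactory.getCache("table");

        // where field1='value' and field2='value'
        Filter filter = new AndFilter(
                new EqualsFilter("getField1", "value"),
                new EqualsFilter("getField2", "value"));

        // entrySet(Filter) evaluates the filter across the cache;
        // "select field3" is then just reading that property off each match
        Set entries = cache.entrySet(filter);
        for (Iterator it = entries.iterator(); it.hasNext(); ) {
            Map.Entry entry = (Map.Entry) it.next();
            Row row = (Row) entry.getValue();
            System.out.println(row.getField3());
        }
    }
}

For larger caches you would typically also add an index (QueryMap.addIndex) on the queried getters; the wiki page above covers that.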
Thanks,
Patrick

Similar Messages

  • How do I access the data saved on my Time Capsule? For example, if I try to view my photos it tells me that I have 2.4 GB of photos and gives me the option to "view with iPhoto", but when I click on "view with iPhoto" nothing happens. Thanks!

    How do I access the data stored on my Time Capsule? For example, if I try to view a photo, I get the option to "view with iPhoto", but when I click on "view with iPhoto" nothing happens. I'm having similar trouble opening movies, music, etc. It appears as though the files exist / are saved on the Time Capsule (because I can see how much storage they consume), but I can't open them. Thanks!

    Unfortunately, Apple removed the feature to "browse backups" in Time Machine backups of iPhoto a few years ago.
    You have to restore the entire iPhoto library to a separate location to be able to "see" it, and then choose the specific images that you want use.
    For complete details, see #15 in Pondini's excellent support document Time Machine -- FAQ.  Check the information in the pink box there.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 16 times longer to process a tenth of the records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBs: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
      xmlrec xmltype;
    BEGIN
      xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
      for i IN 1..100000 loop
        insert into records(ssn, xmlrec) values (to_char(i), xmlrec);
      end loop;
      commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
      description varchar2(100);
    BEGIN
      description := 'This is the code description ';
      for i IN 1..3000 loop
        insert into codes(code, description) values (to_char(i), description);
      end loop;
      commit;
    END;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {
            for $e in $r/Element
            return
              <Element>
                <Subelement1>
                  {$e/Subelement1/Code}
                  <Description>
                    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                  </Description>
                </Subelement1>
                <Subelement2>
                  {$e/Subelement2/Code}
                  <Description>
                    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                  </Description>
                </Subelement2>
                <Subelement3>
                  {$e/Subelement3/Code}
                  <Description>
                    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                  </Description>
                </Subelement3>
              </Element>
          }
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • How to access and display data stored in the cache for a SUP 2.0 workflow?

    Hi to all.
    I have an application with a menu item which obtains data through an online request; the result is shown in a listview.
    My problem is when my BlackBerry has no connection (offline scenario): when I select the menu item, I get an error.
    How can I access and display the data stored in the cache for my MBO? I have read that I can use getMessageValueCollection in custom.js to access my data, but once I get the data, how can I associate it with a listview the way an online request does? Do I have to develop my own screen in HTML, or what?
    Thanks.

    I'm not entirely clear on what you mean by "cache" in this context, so I'm going to assume that what you are really referring to is the contents of the workflow message; correct me if I'm wrong. There is, in later releases, the ability to set a device-side request cache timeout: if you issue an online request, the results are stored in an on-device cache, and if you subsequently reissue the same online request with the same parameter values within that timeout period, the data comes from the cache rather than from the server. But my gut instinct is that this is not what you are referring to.
    To access the data in the workflow message, you are correct, you would call getMessageValueCollection().  It will return an object hierarchy with objects defined in WorkflowMessage.js.  Note that if your online request fails, the data won't magically appear in your workflow message.
    To use the data in the workflow message to update a listview, feel free to examine the code in the listview widgets and in API.js.  You can also create a custom listview as follows:
    function customBeforeNavigateForward(screenKey, destScreenKey) {
         // In this example, we only want to replace the listview on the "My Approvals" screen
         if (destScreenKey == 'My_Approvals') {
              // First, we get the MessageValueCollection that we are currently operating on
              var message = getCurrentMessageValueCollection();
              // Next, we'll get the list MessageValue from that MessageValueCollection
              var itemList = message.getData("LeaveApprovalItem3");
              // Because it's a list, the Value of the MessageValue will be an array
              var items = itemList.getValue();
              // Figure out how many items are in the list
              var numOfItems = items.length;
              // Iterate through the results and build our list
              var i = 0;
              var htmlOutput = '<div><ul data-role="listview" data-theme="k" data-filter="true">';
              while (i < numOfItems) {
                   // Get the current item. This will be a MessageValueCollection.
                   var currItem = items[i];
                   // Get the properties of the current item.
                   var owner = currItem.getData("LeaveApprovalItem_owner_attribKey").getValue();
                   var type = currItem.getData("LeaveApprovalItem_itemType_attribKey").getValue();
                   var status = currItem.getData("LeaveApprovalItem_itemStatus_attribKey").getValue();
                   var startDate = currItem.getData("LeaveApprovalItem_startDate_attribKey").getValue();
                   var endDate = currItem.getData("LeaveApprovalItem_endDate_attribKey").getValue();
                   // Format the dates for presentation
                   var formatStartDate = Date.parse(startDate).toString('MMM/d/yyyy');
                   var formatEndDate = Date.parse(endDate).toString('MMM/d/yyyy');
                   // Decide which thumbnail image to use
                   var imageToUse = '';
                   if (status == 'Pending') {
                        imageToUse = 'pending.png';
                   } else if (status == 'Rejected') {
                        imageToUse = 'rejected.png';
                   } else {
                        imageToUse = 'approved.png';
                   }
                   // Add a new line to the listview for this item
                   htmlOutput += '<li><a id="' + currItem.getKey() + '" class="listClick">';
                   htmlOutput += '<img src="./images/' + imageToUse + '" class="ui-li-thumb">';
                   htmlOutput += '<h3 class="listTitle">' + type;
                   htmlOutput += ' ( ' + owner + ' ) ';
                   htmlOutput += '</h3>';
                   htmlOutput += '<p>' + formatStartDate + ' : ' + formatEndDate + '</p>';
                   htmlOutput += '</a></li>';
                   i++;
              }
              htmlOutput += '</ul></div>';
              // Remove the old listview and add in the new one.
              // Note: this is suboptimal and should be fixed if you want to use it in production.
              $('#My_ApprovalsForm').children().eq(2).hide();
              $('#My_ApprovalsForm').children().eq(1).after(htmlOutput);
              // Add in a handler so that when a line is clicked on, it'll go to the right details screen
              $(".listClick").click(function() {
                   currListDivID = $(this).parent().parent();
                   $(this).parent().parent().addClass("ui-btn-active");
                   navigateForward("Request_Details", this.id);
                   if (isBlackBerry()) {
                        return;
                   }
              });
         }
         // All done.
         return true;
    }

  • Store XML data in a Java cache (HashMap as a key-value pair)

    Hi,
    I have to store an XML file in a Java cache so that I can reuse it. The flow is like this:
    the DAO layer reads the database, creates an XML document and sends it to IBM MQ; our Java code should read this XML file from MQ and store it in a cache (preferably a HashMap). The file contains a unique id for every customer.
    How can we achieve this? One way is to store the XML as a string, with the "id" as key and the whole XML as value. Is this a good way, or is there a better one? Please suggest.
    Sample xml:
    <Client>
      <ClientId>1234</ClientId>
      <ClientName>STechnology</ClientName>
      <ID>10</ID>
      <ClientStatus>ACTIVE</ClientStatus>
      <LEAccount>
        <ClientLE>678989</ClientLE>
        <LEId>56743</LEId>
        <Account>
          <AccountNumber>9876543678</AccountNumber>
        </Account>
      </LEAccount>
      <Service>
        <Cindicator>Y2Y</Cindicator>
        <PrefCode>980</PrefCode>
        <BSCode>876</BSCode>
        <MandatoryContent>MSP</MandatoryContent>
      </Service>
    </Client>
    Thanks
    Sumit

    A HashMap can work, but store the customer-related data in a bean rather than as a raw XML string (perhaps with some child objects as well, if for example the Service subelement can repeat). You then get a HashMap of Client objects, with the clientId as the key into the map, as sketched below.
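    To make that concrete, here is a minimal sketch under some assumptions: the hypothetical Client bean holds just a few top-level fields from the sample XML, the raw message is kept alongside, and the MQ listener wiring is left out.

    import java.io.StringReader;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class ClientCache {

        // Hypothetical bean for the sample XML; add child objects (LEAccount, Service) as needed
        public static class Client {
            String clientId;
            String clientName;
            String clientStatus;
            String rawXml; // keep the original message in case it must be replayed
        }

        // clientId -> Client; ConcurrentHashMap in case MQ listeners run on several threads
        private final Map<String, Client> cache = new ConcurrentHashMap<String, Client>();

        // Called for each XML message read from the queue
        public void put(String xml) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            Client c = new Client();
            c.clientId = text(doc, "ClientId");
            c.clientName = text(doc, "ClientName");
            c.clientStatus = text(doc, "ClientStatus");
            c.rawXml = xml;
            cache.put(c.clientId, c);
        }

        public Client get(String clientId) {
            return cache.get(clientId);
        }

        private static String text(Document doc, String tag) {
            return doc.getElementsByTagName(tag).item(0).getTextContent();
        }
    }

    Lookups by id then stay cheap, and you avoid re-parsing the XML string on every read.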

  • Is it possible to integrate relational data with OLAP cubes?

    I have a web application that accesses cubes created from AWM via the OLAP API. I need to integrate a column from a relational table in the front application and display the column alongside the cube data.
    Is there any way to achieve the functionality from the OLAP API?

    Can you explain how the relational data source relates to the OLAP data; is it a master-detail relationship? If that is the case, then you could consider the following:
    1) It depends on how you are displaying the OLAP data. If you are using a non-BI Beans presentation bean, then, provided the keys are consistent across both data sources, it should be possible to create two separate queries and glue them together using the common keys within your data source module.
    2) Alternatively, you could create a custom text measure within AWM and then use OLAP DML to extract the detail data and load it into a multi-line text variable that can be retrieved via the OLAP API. This might not work well if there is a large number of rows to retrieve from the text variable, as formatting the results within your application might get complicated. The OLAP DML Help contains a lot of excellent examples that will help you create a program that uses SQL commands to load data.
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/

  • Unable to access the data from Data Management Gateway: Query timeout expired

    Hi,
    For the past 2-3 days the data refresh has been failing on our Power BI site. I checked the following:
    1. The gateway is in running status.
    2. The data source is also in ready status, and the test connection worked fine too.
    3. Below is the error in System Health:
    Failed to refresh the data source. An internal service error has occurred. Retry the operation at a later time. If the problem persists, contact Microsoft support for further assistance.
    Error code: 4025
    4. Below is the error in Event Viewer:
    Unable to access the data from Data Management Gateway: Query timeout expired. Please check 1) whether the data source is available 2) whether the gateway on-premises service is running using Windows Event Logs.
    5. This is the correlation id for the latest refresh failure: f9030dd8-af4c-4225-8674-50ce85a770d0
    6. The Refresh History error is:
    Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: The operation has timed out. Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: Query timeout expired.
    Any idea what could have gone wrong suddenly? Everything was working fine for the last month.
    Thanks,
    Richa

    Never mind, I figured out there was a lock on a SQL table which caused all the problems. Once I released the lock, the PowerPivot refresh started working fine.
    Thanks.

  • PA_CONTRACT_XSLFO: How to invoke an RTF template with a related data template

    Dear Reader,
    actually I want to extend the standard Document Type Layout for a Purchase Agreement contract with additional data from the approved supplier list (ASL).
    Therefore I have created an RTF template and a data template with the needed SQL statement. For testing I put these in a standalone concurrent program and it works fine (the result was a blue table with all data rows).
    The next step for me was to invoke the RTF template from the PA_CONTRACT_XSLFO template to extend the Document Type Layout for my Purchase Agreement contract. So I put the needed invoke statements
    <xsl:import href="xdo://XXOC.XX_RTF_TEMPLATE.de.00/"/>
    and
    <xsl:call-template name="XX_RTF_TEMPLATE"/>
    into the XSLFO template. I also extended the RTF template with the define-template statement
    <?template:XX_RTF_TEMPLATE?>
    So all seems to be fine.
    As a result I get the standard document for the Purchase Agreement contract with the additional blue table from the RTF template, BUT WITHOUT DATA!
    From my point of view, the SQL statement in the data template is never executed, but I don't know why.
    Does Oracle support a combination of an XSLFO template with a data template?
    [XSLFO template] with related [XSD data definition]
    calls [RTF template] with related [data template (with included SQL statement)]
    Thanks for your help.
    Best regards
    Mario.

    To call an RTF template from another RTF template by passing a value, try creating in the main template a hyperlink URL with parameters for the other template:
    http://bipconsulting.blogspot.ru/2010/02/drill-down-to-detail-or-another-report.html
    As for "when a user pulls a quote report from Siebel, this new RTF template should attach to the quote at the end": IMHO you cannot attach it to the main one; it will be a second, independent report.
    You could try a subtemplate, but that is not about calling an RTF from an RTF by click; it is about automatically calling an RTF subtemplate from the main RTF based on some condition.
    For example, the main template contains some data, and if some condition is true it calls the subtemplate and places its output where the condition appears.

  • Excel, PowerView error in SharePoint 2013: "An error occurred while loading the model for the item or data source 'EntityDataSource'. Verify that the connection information is correct and that you have permissions to access the data source."

    I've installed SQL Server 2012 SP1 + SP Server 2012 + SSRS and the PowerPivot add-in.
    I also configured Excel Services correctly. Everything works fine, but PowerView doesn't work!
    When I open an Excel workbook containing a PowerView report, an error occurs: "An error occurred while loading the model for the item or data source 'EntityDataSource'. Verify that the connection information is correct and that you have permissions to access the data source."
    error detail: 
    <detail><ErrorCode xmlns="http://www.microsoft.com/sql/reportingservices">rsCannotRetrieveModel</ErrorCode><HttpStatus xmlns="http://www.microsoft.com/sql/reportingservices">400</HttpStatus><Message xmlns="http://www.microsoft.com/sql/reportingservices">An
    error occurred while loading the model for the item or data source 'EntityDataSource'. Verify that the connection information is correct and that you have permissions to access the data source.</Message><HelpLink xmlns="http://www.microsoft.com/sql/reportingservices">http://go.microsoft.com/fwlink/?LinkId=20476&amp;EvtSrc=Microsoft.ReportingServices.Diagnostics.Utilities.ErrorStrings&amp;EvtID=rsCannotRetrieveModel&amp;ProdName=Microsoft%20SQL%20Server%20Reporting%20Services&amp;ProdVer=11.0.3128.0</HelpLink><ProductName
    xmlns="http://www.microsoft.com/sql/reportingservices">Microsoft SQL Server Reporting Services</ProductName><ProductVersion xmlns="http://www.microsoft.com/sql/reportingservices">11.0.3128.0</ProductVersion><ProductLocaleId
    xmlns="http://www.microsoft.com/sql/reportingservices">127</ProductLocaleId><OperatingSystem xmlns="http://www.microsoft.com/sql/reportingservices">OsIndependent</OperatingSystem><CountryLocaleId xmlns="http://www.microsoft.com/sql/reportingservices">1033</CountryLocaleId><MoreInformation
    xmlns="http://www.microsoft.com/sql/reportingservices"><Source>ReportingServicesLibrary</Source><Message msrs:ErrorCode="rsCannotRetrieveModel" msrs:HelpLink="http://go.microsoft.com/fwlink/?LinkId=20476&amp;EvtSrc=Microsoft.ReportingServices.Diagnostics.Utilities.ErrorStrings&amp;EvtID=rsCannotRetrieveModel&amp;ProdName=Microsoft%20SQL%20Server%20Reporting%20Services&amp;ProdVer=11.0.3128.0"
    xmlns:msrs="http://www.microsoft.com/sql/reportingservices">An error occurred while loading the model for the item or data source 'EntityDataSource'. Verify that the connection information is correct and that you have permissions to access the
    data source.</Message><MoreInformation><Source>Microsoft.ReportingServices.ProcessingCore</Source><Message msrs:ErrorCode="rsErrorOpeningConnection" msrs:HelpLink="http://go.microsoft.com/fwlink/?LinkId=20476&amp;EvtSrc=Microsoft.ReportingServices.Diagnostics.Utilities.ErrorStrings&amp;EvtID=rsErrorOpeningConnection&amp;ProdName=Microsoft%20SQL%20Server%20Reporting%20Services&amp;ProdVer=11.0.3128.0"
    xmlns:msrs="http://www.microsoft.com/sql/reportingservices">Cannot create a connection to data source 'EntityDataSource'.</Message><MoreInformation><Source></Source><Message>For more information about this error navigate
    to the report server on the local server machine, or enable remote errors</Message></MoreInformation></MoreInformation></MoreInformation><Warnings xmlns="http://www.microsoft.com/sql/reportingservices" /></detail>
    Please help me solve this issue. I don't know if uploading the Excel workbook is enough, or whether it needs to connect to another data source.
    Thanks in advance.

    Hi Ali.y,
    Based on the current error message, the error can be related to the Claims to Windows Token Service (C2WTS) and is an expected error under certain conditions. To verify the issue, please check the aspects below:
         1. The C2WTS Windows service and the C2WTS SharePoint service are both running.
         2. The SQL Server Browser service is running on the machine that has the PowerPivot instance of SSAS.
         3. Check the domain. You're signing into SharePoint with a user account in some domain (call it Domain A). Either Domain A is the same as the domain in which the SharePoint server itself is located (call it Domain B), or Domain A trusts Domain B.
    In addition, the error may be caused by a Kerberos authentication issue due to a missing SPN. In order to make Kerberos authentication work, you need to configure Analysis Services to run under a domain account, and register the SPNs for the Analysis Services server.
    To create the SPN for the Analysis Services server that is running under a domain account, run the following commands at a command prompt:
    • Setspn.exe -S MSOLAPSvc.3/Fully_Qualified_domainName OLAP_Service_Startup_Account
    Note: Fully_Qualified_domainName is a placeholder for the FQDN.
    • Setspn.exe -S MSOLAPSvc.3/serverHostName OLAP_Service_Startup_Account
    For more information, please see:
    How to configure SQL Reporting Services 2012 in SharePoint Server 2010 / 2013 for Kerberos authentication
    Regards,
    Heidi Duan
    TechNet Community Support

  • How to retrieve relational data from an XMLType column in Oracle 10g R2

    Hi
    I want to know how to retrieve the data that is in an XML document stored in an XMLType column in a table (or in an XMLType table that holds the XML document). The XML document has to be queried with XQuery as relational data (not as an XML document).
    If anybody has some ideas, please share them ASAP.
    Please share an example, because I am new to XQuery.
    Thanks in Expectation,
    Selva.

    Got it working now. I used the 'extract' function in my select statement, but had to add the .getStringVal() function. The extract function, just by itself, returns an XMLType. The call for the column in the SQL statement looked like this:
    extract(XML_CONTENT, '/ROOTOBJECT').getStringVal() xml_content
    Thanks so much for your help. Problem solved!

  • My hard drive failed last year and was replaced under a scheme at an Apple Genius bar. I couldn't access my data to erase anything and the apple guy said he couldn't guarantee its safety. Since then I've been the victim of fraud twice. Coincidence?

    I was told the hard drives were sent to China for refurbishment and the guy was pretty flippant about my concerns. I couldn't erase anything because I couldn't access anything. I had to sign a waiver. He acted as if I was getting something for nothing and should quit complaining. Now my bank account and credit card have been compromised and it just seems a little too coincidental for my liking. Every single piece of my life was stored on that hard drive including photos of my kids and it makes me sick to think that someone could have accessed it. I think Apple should be more responsible and caring about their customers' concerns. I have an iPad, iMac and iPod touch and we have other Apple laptops, phones, iPads and iPods in the family. I'm not sure I would happily buy any more items after such poor customer relations. I would rather have kept the old hard drive but apparently this wasn't an option. Fuming. Can't even remember half the accounts I've opened but will have to settle for changing all banking details. Buyer beware...

    No hate. Not sure where you got that from. Just stating facts as I know them about the people employed at the Apple stores.
    Yes, you ARE (or were) accusing whoever took in or had access to your old drive of being the person or persons accessing your data to commit fraud against you. That is exactly what you stated in the title.
    I think you are slightly paranoid.
    TheNic767 wrote:
    Wow so much hate! I never insinuated that the guy was a genius or not, I just mentioned the name of the service. I also was not accusing him or an apple employee of fraud. My concern was that it would be compromised after its arrival in China. I have also been told by friends way more IT literate than me that removal of data from a failed hard drive is relatively simple with the right equipment and skills. Of course I am no genius in these areas myself and don't profess to be but I would expect a rather more sympathetic ear from the staff and possibly an explanation on the unlikelihood/likelihood of my data being stolen. Unlike the character assassination I appear to be getting from you. Thanks for your help. Have a nice day

  • Relative dates in advanced search / snapshot queries

    Hi -
    Is there any way to search with a relative date in PT 5.x? EX: "Find me content published in the last week"
    We have over 700 publications that make use of relative date searches in PT 4.5 WS. I understand that these should be converted to separate snapshot queries in 5.x, but as I look at things I realize there does not seem to be a way to query by relative date; I seem to need fixed, specific dates ("between October 1 and October 7"). We rely heavily on this kind of logic to keep our content fresh with no ongoing maintenance.
    Anyone have any suggestions? Are we missing something obvious?
    Thanks,
    Eric

    Hi.
    You can do this:
    1.- Replace the controller class (MAC) for the corresponding structure.
    2.- Redefine the method QUERY; then it is possible to change the parameters that the "Search Engine" (it might be the Reporting Framework) uses.
    You can find more details in "The Cookbook", which is available in the marketplace. If you don't have access, give me your e-mail and I will send it to you.
    Best Regards.
    Armando Rodriguez.

  • Access a data file within a jar package

    hello, everyone
    I have a problem locating a data file packaged in a jar file.
    For example, in foo.jar there is a data file at the path "d1/d2/data.log" inside the jar. From the command line, I want to pass that data file as an input to the Main class, which will load data.log at runtime. However, it reports a "no such file or directory" exception. On the command line, I typed:
    java -classpath foo.jar d1.d2.Main d1/d2/data.log
    Any suggestions?
    Thanks
    Chinyi

    I don't understand what you're trying to do by including d1/d2/data.log on the command line; it appears to be extraneous and invalid.
    Assuming:
    that the class you want to execute is Main.class in foo.jar, and that its path in the jar is d1/d2, and
    that Main reads d1/d2/data.log as a classpath resource (not as an ordinary file) when it accesses this file,
    it should work. See the sketch below.
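    To illustrate that last point: a file inside a jar is not reachable through normal FileInputStream paths, so it has to be read as a classpath resource. Here is a minimal sketch of what Main could look like, using only the class and resource names from the question:

    package d1.d2;

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    public class Main {
        public static void main(String[] args) throws IOException {
            // Load the packaged file from the classpath, not from the file system.
            // The leading slash makes the path absolute within the jar.
            InputStream in = Main.class.getResourceAsStream("/d1/d2/data.log");
            if (in == null) {
                throw new IOException("resource /d1/d2/data.log not found on classpath");
            }
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }

    With that in place, java -classpath foo.jar d1.d2.Main needs no extra argument; the data file travels inside the jar.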

  • How to fetch relational data from an XML file registered in XDB

    Hi,
    I have to register an XML file into the XDB repository and then fetch the data of the XML file as a relational structure through a select statement.
    I used the block below to register the XML file in XDB:
    DECLARE
      v_return BOOLEAN;
    BEGIN
      v_return := DBMS_XDB.CREATERESOURCE(
        abspath => '/public/demo/xml/db_objects.xml',
        data    => BFILENAME('XML_DIR', 'db_objects.xml')
      );
      COMMIT;
    END;
    Now I have to fetch the values in the XML file as relational data.
    Is that possible?
    Can anyone help me?
    Regards,
    suresh.

    When you transform your XML data to an xmltype you can do something like this, for example:
    select
      extractvalue(value(p), '/XMLRecord/Session_Id') session_id,
      extractvalue(value(p), '/XMLRecord/StatementId') statementid,
      extractvalue(value(p), '/XMLRecord/EntryId') entryid
    from
      table(xmlsequence(extract(xmltype('
    <XMLdemo>
      <FormatModifiers><FormatModifier>UTFEncoding</FormatModifier></FormatModifiers>
      <XMLRecord>
        <Session_Id>117715</Session_Id>
        <StatementId>6</StatementId>
        <EntryId>1</EntryId>
      </XMLRecord>
    </XMLdemo>
    '), '/XMLdemo/*'))) p
    where extractvalue(value(p), '/XMLRecord/Session_Id') is not null;
    For this sample I've put a readable XML document in plain text and converted it to xmltype, so you can run it on your own database.

  • Web Analyzer - ability to access external data sources?

    Documentation on help.sap.com for the 2004s Web Analyzer notes it can render queries, query views, InfoProviders, and external data sources for ad-hoc analysis in the standardized reporting template.
    Can anyone clarify exactly what constitutes external data? To my knowledge, I didn't think it was possible to report on something outside of the BI instance (say, an Oracle table on another server).
    Thanks in advance for your help.

    Hi Scott,
    in the Web Analyzer you can use external OLAP data sources (ODBO or XMLA). Relational data sources are not supported in NW2004s.
    You can set this up in the portal system landscape.
    Please have a look at
    http://help.sap.com/saphelp_nw2004s/helpdata/de/8e/020597f9b4486492e69283fab424fa/frameset.htm
    Depending on the data source, different capabilities are supported, so some options are only available when you are using the BI OLAP Processor and won't be supported when using external data sources.
    Heike

Maybe you are looking for

  • Automatically trigger the event to load data from Planning cube to Standard Cube

    Hello, We have a below set up in our system.. 1. A Planning BEx query using which user makes certain entries and writes back data to the planning cube. 2. An actual reporting cube which gets data from the planning cube above. Now, what we want to do

  • New 2nd Mac Pro boot drive (ssd) how do I know which is booted?

       Both boot drives are Leopard. Also how do I switch between them safely? Is this do-able w/o restarting?  The restart surprised me and I damaged one, maybe more older externals, so I'm gunshy about  moving ahead... Need conceptual and practical hel

  • Can Airport stream 1080p video?

    Hey i have a airport extreme 4th generation and I am trying to stream 1080p mkvs from an attached external hard drive on my mac to my wd tv live but the video keeps stuttering. Just trying to rule out everything I can to fix the problem. Can my airpo

  • SAP disp+work

    Hi Gurus I have installed mySAP ecc 5.0 with oracle 9.2 DB on solaris 10.My Central instance got installed without any error.But my Database installation i got some errors. Fist error was ERROR 2007-06-27 13:14:16 FRF-00007  Unable to open RFC connec

  • Can I redownload previously purchased items?

    We had a wealth of videos from the itunes store for the kids, but lost all this due to an evacuation. iTunes store recognises that I purchased these before and offers to purchase them again. Is there any way to download again without paying. I tried