Search Related Data

Guys,
Is there any way in SharePoint Search to search related data in a particular column? Suppose I have a managed property named SectionNumber (string type) with crawled values like 2, 2a, 2aa, 3, 3a, 4, 5, 6, 21, 20, 22, and I now want to show
results like 2, 2a, 2aa to users who search for SectionNumber 2.
I have tried SectionNumber:2* in my search query, but the problem is that the query also returns 20 and 22. Can I put some sort of regex in the search query?
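For example, what I am effectively after is the pattern ^2[a-z]*$ (the digit 2 followed only by letters). A manual expansion such as SectionNumber:2 OR SectionNumber:2a OR SectionNumber:2aa works for the values listed above, but it does not scale because the letter suffixes vary from section to section.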
Any workable solution will do, as I can make changes in my search center, central admin, or my custom query.
Nitin Gupta, SharePoint Consultant

Can you please provide an example of how to mask numbers in search? I guess that is not available. However, I tried synonyms, but somehow it's not working.
I also tried regex in query rules, but again I was not able to build a generic regex that serves the purpose.
Nitin Gupta, SharePoint Consultant

Similar Messages

  • Relative dates in advanced search / snapshot queries

    Hi -
    Is there any way to search with a relative date in PT 5.x? EX: "Find me content published in the last week"
    We have over 700 publications that make use of relative date searches in PT 4.5 WS. I understand that these should be converted to separate snapshot queries in 5.x, but as I look at things I realize there does not seem to be a way to query by relative date - I seem to need fixed, specific dates ("between October 1 and October 7"). We rely heavily on this kind of logic in order to keep our content fresh with no ongoing maintenance.
    Anyone have any suggestions? Are we missing something obvious?
    Thanks,
    Eric

    Hi.
    You can do this:
    1. Replace the controller class (MAC) for the corresponding structure.
    2. Redefine the QUERY method; it is then possible to change the parameters that the "Search Engine" (it might be the Reporting Framework) uses.
    You can find more details in "The Cookbook", which can be found in the marketplace. If you don't have access, give me your e-mail and I will send it to you.
    Best Regards.
    Armando Rodriguez.

  • Where to search for a specific Dimension's related data

    Hi,
    I guess Hyperion Planning stores the dimension-related data (parent, child, UDA, attributes, consolidation operator, data storage, etc.) in some relational tables of that Planning application. Can anybody help me understand where and how those data are stored, and which table names I should look at for a particular dimension's related data?
    Actually, I need to look into the Planning RDBMS tables to get the member names of one particular dimension, then search another huge Oracle database for those members and retrieve the relevant data by writing a query. I am using Planning ver 9.3.1.
    Please revert back for any clarification.
    Regards.

    Hi,
    Take a look at the tables below in your application repository schema (db); they are all linked through id fields and they include dimensional information.
    HSP_OBJECT
    HSP_OBJECT_TYPE
    HSP_DIMENSION
    HSP_MEMBER
    HSP_ALIAS
    You get detailed information from HSP_OBJECT, which keeps the details for all metadata objects. The other tables will help you understand the relations, positions, etc.
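    To illustrate, a starting-point query for pulling one dimension's member names together with their parents might look like the sketch below. The column names (OBJECT_ID, OBJECT_NAME, PARENT_ID, MEMBER_ID, DIM_ID) are assumptions based on a typical Planning repository and should be verified against your schema; 'Entity' is a placeholder dimension name.
    -- Sketch: list the members of one dimension with their parent names.
    -- Column names are assumptions; verify them against your repository schema.
    select o.object_name as member_name,
           p.object_name as parent_name
    from   hsp_member m
           join hsp_object o on o.object_id = m.member_id
           join hsp_object p on p.object_id = o.parent_id
    where  m.dim_id = (select d.dim_id
                       from   hsp_dimension d
                              join hsp_object od on od.object_id = d.dim_id
                       where  od.object_name = 'Entity'); -- placeholder dimension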
    Cheers,
    Alp

  • The '' secondary data source is not a relational data source, or does not use an OLE DB provider.

    Hi,
    We are using Teradata as a data source in SSAS 2008 R2, via the '.Net Providers\.Net Data Provider for Teradata'.
    In the DSV, we have two data sources, both using Teradata. While processing dimensions, every dimension coming from the secondary data source gets the error message:
    "The '' secondary data source is not a relational data source, or does not use an OLE DB provider."
    But we are not using an OLE DB provider for now. We've seen some threads saying that named queries can replace tables from the secondary data source, but since we have a lot of tables, that is hard to implement using named queries.
    Any inputs would be appreciated. 

    Hi memostone,
    When defining a data source view that contains tables, views, or columns from multiple data sources, the first data source from which you add objects to the data source view is designated as the primary data source (you cannot change the primary data source after it is defined). After defining a data source view based on objects from a single data source, you can then add objects from other data sources.
    If an OLAP processing or data mining query requires data from multiple data sources in a single query, the primary data source must support remote queries using OpenRowset. Typically, this will be a SQL Server data source.
    Here are the restrictions:
    Either the primary data source is SQL Server and supports OpenRowset to the secondary data source,
    or you design the cube in such a way that processing for each object only needs to access data from a single data source.
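    For reference, an OpenRowset-style remote query issued from a SQL Server primary data source looks roughly like the sketch below (server, database, and table names are placeholders, and the 'Ad Hoc Distributed Queries' option must be enabled). Note that OPENROWSET requires an OLE DB provider, which the .Net Data Provider for Teradata is not.
    -- Placeholder names throughout; requires 'Ad Hoc Distributed Queries'.
    SELECT t.*
    FROM OPENROWSET('SQLNCLI',
                    'Server=REMOTESERVER;Trusted_Connection=yes;',
                    'SELECT Col1, Col2 FROM SomeDb.dbo.SomeTable') AS t;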
    I recommend you refer to the following article:
    Defining a Data Source View (Analysis Services):
    http://technet.microsoft.com/en-us/library/ms174600.aspx
    Regards,
    Bin Long
    TechNet Community Support

  • How to generate XML from relational data : PL/SQL or Java

    I'm new to Oracle XML and would appreciate some advice. I've been asked to generate XML documents from data stored in relational tables. The XML documents must be validated against a DTD. We will probably want to store the XML in the database.
    I've seen a PL/SQL based approach as follows :
    1. Mimic the structure of the DTD using SQL object types.
    2. Assign the relational data to the object types using PL/SQL as required.
    3. Use the SYS_XMLGEN package to render the required XML documents from the SQL objects.
    However, creating the object types seems to be quite time consuming (step 1 above) for anything other than the simplest of XML documents.
    I've also seen that there is the Java based approach, namely :
    1. Use the XML generator to build Java classes based on a DTD.
    2. Use these classes to build the required XML
    On the face of it, the Java based approach seems simpler. However, I'm not that familiar with Java.
    Which is the best way to proceed? Is the PL/SQL based approach worth pursuing, or should I bite the bullet and brush up my Java?
    Is it possible to use a combination of PL/SQL and Java to populate the DTD-generated Java classes (step 2 of the Java approach) to reduce my learning curve?
    Thanks in advance

    To help answer your questions:
    1) Now, in 9iR2, you can use SQL/XML as another choice.
    2) You can also use XSU to generate the XML and use XSLT to transform it to a desired format instead of using object views if possible.
    3) The XDK provides class generator support to populate XML data into Java classes.
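    For illustration, a minimal SQL/XML query of the kind available from 9iR2 onwards might look like the sketch below (the table and column names come from the classic EMP sample schema; substitute your own).
    -- Render relational rows as XML elements with SQL/XML.
    SELECT XMLELEMENT("Employee",
             XMLFOREST(e.empno    AS "Id",
                       e.ename    AS "Name",
                       e.hiredate AS "HireDate")) AS xml_row
    FROM   emp e;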

  • Any way to disable "Use Relative Dates" everywhere in Finder?

    Is there any way to make Use Relative Dates to NOT be on by default in Finder?
    I just hate the yesterday, today, etc. crap. I want to just see the file dates. Period. I don't want to have to figure out what the date was yesterday to do a file date comparison.
    Yes, you can turn it off but each time I check Use as defaults and open a new folder in finder, the relative dates are back! Argggh!
    Thanks for any help on this.

    Yes, it's very easy: with Finder open, go to View in the menu bar at the top, choose Show View Options, and remove the check mark for "Use relative dates".

  • Standard IDOCS, Programs for posting FI & bank related data.

    Hi,
    (1) Are there any IDocs available for posting FI documents and vendor master data?
    (2) Are there any outbound IDocs or programs for sending data to banks, e.g. positive pay?
    Kindly reply to these questions. Correct answers will be awarded points.
    Regards,
    Akshaya.

    Hi,
    There is the message type BANK_CREATE for posting FI-related bank details. Using change pointers, you can trigger the IDocs for posting the bank-related data.
    Regards,
    Uday

  • PA_CONTRACT_XSLFO: How to invoke a RTF-template with related data template

    Dear Reader,
    actually I want to extend the standard Document Type Layout for a Purchase Agreement Contract with additional data from the approved supplier list (ASL).
    Therefore I have created an RTF template and a data template with the needed SQL statement. For testing I put this in a standalone concurrent program and it works fine (the result was a blue table with all data rows).
    The next step for me was to invoke the RTF template from the PA_CONTRACT_XSLFO template, extending the Document Type Layout for my Purchase Agreement Contract. So I put the needed invoke statements
    <xsl:import href="xdo://XXOC.XX_RTF_TEMPLATE.de.00/"/>
    and
    <xsl:call-template name="XX_RTF_TEMPLATE"/>
    into the XSLFO template. I also extended the RTF template with the template definition statement
    <?template:XX_RTF_TEMPLATE?>
    So everything seems to be fine.
    As a result I get the standard document for the Purchase Agreement Contract with the additional blue table from the RTF template, BUT WITHOUT DATA!
    From my point of view, the SQL statement in the data template is never executed, but I don't know why.
    Does Oracle support a combination of an XSLFO template with a data template?
    [XSLFO-template] with related [XSD-data definition]
    calls [RTF-template] with related [data template (with included sql-statement)]
    Thanks for your help.
    Best regards
    Mario.

    To call an RTF template from another RTF template by passing a value, try creating a hyperlink in the main template, a URL with parameters for the other template:
    http://bipconsulting.blogspot.ru/2010/02/drill-down-to-detail-or-another-report.html
    When a user pulls a quote report from Siebel, this new RTF template should attach to the quote at the end - it will be only another report.
    IMHO you cannot attach it to the main one; it will be a second, independent report.
    You can try a subtemplate, but that is not about calling an RTF from an RTF by click;
    it is about automatically calling an RTF subtemplate from the main RTF based on some conditions.
    For example, the main template contains some data, and if some condition is true, it calls the subtemplate and places its output where the condition appears.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) passing a batch of records , or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>}
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
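    For what it is worth, one way to push the code lookups into the SQL WHERE clause (a sketch against the sample RECORDS and CODES tables above; it returns the shredded rows rather than the rebuilt document) is to shred the record with XMLTABLE and join CODES relationally, which at least gives the optimizer a chance at a hash join:
    -- Shred one record with XMLTABLE, then join the code table relationally.
    select x.code1, c.description
    from   records r,
           xmltable('/Root/Element' passing r.xmlrec
                    columns code1 varchar2(4) path 'Subelement1/Code') x,
           codes c
    where  r.ssn  = '10000'
    and    c.code = x.code1;
    Reassembling the full XML document from those rows would still need XMLELEMENT/XMLAGG on top, but the lookup itself then becomes a plain relational join.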

  • Get Rid of Relative Dates in Applications?

    How do I get rid of relative dates in applications such as Mail?
    I cannot find anything in the Mail Preferences to change this.
    Nor can I find anywhere in System Preferences -> Date & Time and International to change this behaviour.
    Settings in View->Show View Options, as previously recommended, do not affect the date format in applications.
    Thanks
    Al Maloney

    Why get rid of them?
    (1) They require extra steps in my brain to calculate the date of the document.
    (2) When I do not know to-day's date, how can I calculate yesterday's date?
    (3) Aesthetically, they are inconsistent with the list of dates.
    (4) I just do not like them.
    Sláinte!
    Al

  • Why can't I set a relative date for a recurring event in calendar on my ipod touch?

    why can't I set a relative date for a recurring event in calendar on my ipod touch?

    Because the Calendar app does not support it. There are apps in the App Store that do.

  • Hi I'm running Addressbook and cannot clear previous entry easily when searching my data base of around 5,000 contacts

    Hi, I'm running Address Book and cannot clear the previous entry easily when searching my database of around 5,000 contacts.
    I prefer to view in All contacts on a double page spread with details on the right page.  Searching doesn't seem to work correctly in this view.
    It's always the second search that is problematic.
    I've tried typing over it, and all it seems to do is confine the search to the entries that came up for the previous search.
    I've tried using the x to clear the previous entry and then typing the next search - same problem. The only way seems to be to move from "All Contacts" to "Groups". Then the searched name appears and I can return to All Contacts to see the full details.
    Surely three key presses are not the way it's supposed to work?
    FYI
    Processor  2.7 GHz Intel Core i7
    Memory  8 GB 1333 MHz DDR3
    Graphics  Intel HD Graphics 3000 512 MB
    Software  Mac OS X Lion 10.7.3 (11D50d)
    Address book Version 6.1 (1083)
    MacBook Pro, Mac OS X (10.7.1), 8Mb RAM 2.7Ghz i7

    AddressBook experts are here:
    https://discussions.apple.com/community/mac_os/mac_os_x_v10.7_lion#/?tagSet=1386

  • How to retrieve relational data from an XMLType column in Oracle 10g R2

    Hi
    I want to know how to retrieve the data that is in an XML document stored in an XMLType column in a table (or in an XMLTable which has the XML document). This XML document has to be queried with XQuery as relational data (not as an XML document).
    If anybody has some ideas, please share them ASAP.
    Please share an example as well, because I am new to XQuery.
    Thanks in Expectation,
    Selva.

    Got it working now. I used the 'extract' function in my select statement, but had to add the .getStringVal() function. The extract function, just by itself, returns an XMLType. The call for the column in the SQL statement looked like this:
    extract(XML_CONTENT, '/ROOTOBJECT').getStringVal() xml_content
    Thanks so much for your help. Problem solved!
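    As a side note, on 10gR2 XMLTABLE is another way to get genuinely relational rows (rather than one extracted string) out of an XMLType column. In the sketch below, MY_TABLE, XML_CONTENT, and the NAME element are placeholders:
    -- Placeholder table, column, and element names.
    select x.item_name
    from   my_table t,
           xmltable('/ROOTOBJECT' passing t.xml_content
                    columns item_name varchar2(100) path 'NAME') x;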

  • Settlement of planned deliv. costs not possible if GR-related data entered

    Hi gurus,
    While running transaction MRRL, if I enter the GR posting date and tick the goods + delivery check box, an error message is triggered: 'Settlement of planned deliv. costs not possible if GR-related data entered'. Please explain the reason for this error.

    I had the same problem, then I realised that:
    This message only appears if I am using doc. selection = 4.
    tks
    Teresinha

  • Need Suggestion to Stage Process Order related data

    Hi All,
    Could anybody help me by providing a solution or suggestion for the problem I am describing here? The problem is how to stage the process-order-related data (which is downloaded from SAP ECC for sending to the machine database, or confirmed by the machine database for sending back to SAP ECC). Can we use the NetWeaver database, either by creating separate tables or by creating a separate schema in the NetWeaver DB, or should we go for a separate DB to stage all the transactional data of the process orders?
    Thanks in Advance.
    Chandan

    Hello Chandan,
    1) Yes, you can use the underlying NetWeaver DB, either by creating new tables or by creating a new schema, but as per MII best practices it is not recommended, because by mistake you might end up affecting the NW and MII configurations.
    2) There is no hard and fast rule saying you must stage data, but it is very good if you do, because:
                           a) your data will be buffered when SAP is down;
                           b) faster processing;
                           c) the ability to perform more analysis on the data through many drill-downs to sub-levels.
    3) I would recommend going for a separate DB to stage all your SAP data (both for sending data to and receiving data from ECC).
    Hope this helps!!
    Regards,
    Adarsh
