DBMS_XMLGEN with large queries

Hello All,
Version: Oracle 9i.
I have a large SQL query (involving PARTITION BY) that I use to get my data from the database. When run on its own, the query successfully returns about 300+ records. Here, however, I am trying to call it from within a stored procedure.
I have assigned the SQL query to the sql_stmt variable, which is a VARCHAR2. When I print the sql_stmt variable, it prints fine.
When I try executing this query through DBMS_XMLGEN to convert the result to XML and output it as a CLOB, I get the following error:
ORA-19206: Invalid value for query or REF CURSOR parameter
ORA-06512: at "SYS.DBMS_XMLGEN", line 83
ORA-06512: at "ADI.GETDATA", line 330
ORA-06512: at line 5
Can you please help identify what the problem is here?
1) Is there a limit on how big the query can be for DBMS_XMLGEN?
2) Does DBMS_XMLGEN support queries that use PARTITION BY and that contain inline queries?
3) Is there any other way to approach this?
Thanks for your help.
AD
create or replace PROCEDURE "GETDATA" (
p_XMLReport OUT CLOB
) AS
     qryCtx DBMS_XMLGEN.ctxHandle;
     v_ReportFields VARCHAR2(32767);
     v_Inlinequery VARCHAR2(32767);
     v_XML XMLTYPE;
     record XMLTYPE;
     result CLOB;
     sql_stmt VARCHAR2(32767);
     v_WhereClause VARCHAR2(32767);
     v_filterPtr NUMBER;
     v_pair VARCHAR2(1023);
     v_field_name VARCHAR2(511);
     v_field_value VARCHAR2(511);
     v_pair_idx NUMBER;
     v_Count NUMBER;
BEGIN
     DBMS_OUTPUT.ENABLE(1000000);
v_ReportFields := 'SELECT nrtd.NPIProjectID, nrtd.ProjectType';
v_Inlinequery := '--Non relational TestData data elements. Alias Table'||'''nrtd'''
|| chr(10)|| 'FROM (SELECT npip.id AS NPIProjectID, npip.developmentsite AS DevelopmentSite, npip.projectstatus AS ProjectStatus,'
|| chr(10)||'npip.ProjectType AS ProjectType, pt.Creator, ss.Business_Unit, ss.Division, npip.strategy_name AS Strategy_Name,'
|| chr(10)||'ffi.assemblysite, ffi.packagetype, ti.directreleasestrat AS directreleasestrat,'
|| chr(10)||'ti.testsite, ti.PackSite, npip.umbrellaproject_id, npip.keycustomers, npip.projectedreleasedate AS ProjectedReleasedate,'
|| chr(10)||'pt.perfwwmfgnpcoordinator AS perfwwmfgnpcoordinator, pt.perfwwmfgoffshoretesteng AS perfwwmfgoffshoretesteng, ffi.packagedesignator,'
|| chr(10)||'ffi.leadballcount, ffi.bodysize, ffi.finishcode AS finishcode, ffi.pkgnumber, ffi.pkgstatus,'
|| chr(10)||'ffi.pborpbfree AS pborpbfree, ffi.paddlex AS paddlex, ffi.paddley AS paddley, ffi.backgrind AS backgrind,'
|| chr(10)||'ffi.LeadFrameType, ffi.PiecePart AS PiecePart, ffi.assypackoption AS assypackoption,'
|| chr(10)||'ti.testprocess AS testprocess, ti.contactorresreq AS contactorresreq,'
|| chr(10)||'ti.highrfrequired AS highrfrequired, ti.PeakCurrent AS PeakCurrent, vp.AssignedFocusFactory,'
|| chr(10)||'vp.MfgReleaseReqStatus AS MfgReleaseReqStatus, vp.NumActuators AS NumActuators,'
|| chr(10)||'vp.NumBoards AS NumBoards, vp.numcontactors AS numcontactors, qi.IsQualPlanRequired, '
|| chr(10)||'qi.QualNumber, qi.FailedQualProjectdisposition, qi.IdentifiedActionFromQualReview'
|| chr(10)||'FROM NPIProjects npip'
|| chr(10)||'LEFT OUTER JOIN ProjectTeam pt ON pt.NPIProjectId = npip.Id'
|| chr(10)||'LEFT OUTER JOIN sapws_strategies ss ON ss.Strategy = npip.strategy_name'
|| chr(10)||'LEFT OUTER JOIN FormFactorInformation ffi ON ffi.NPIProjectId = npip.Id'
|| chr(10)||'LEFT OUTER JOIN TestInformation ti ON ti.FormFactorInformationId = ffi.Id'
|| chr(10)||'LEFT OUTER JOIN ValidationPlans vp ON vp.FormFactorInformationId = ffi.Id'
|| chr(10)||'LEFT OUTER JOIN QualInformation qi ON qi.FormFactorInformationId = ffi.Id) nrtd,'
|| chr(10)||' -- Relational data elements.'
|| chr(10)||' --TestGrades DataElements. Alias Table'|| '''rtgd'''
|| chr(10)||' (SELECT NPIProjectID, '
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade,'';''),'';'')) ProductGrade,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| SpeedSuffix,'';''),'';'')) SpeedSuffix,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| PartClass,'';''),'';'')) PartClass,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| MultiGrade,'';''),'';'')) MultiGrade,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| EstTotalTestTimeGood,'';''),'';'')) EstTotalTestTimeGood,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| EstTotalTestTimeBad,'';''),'';'')) EstTotalTestTimeBad,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| ActTotalTestTimeGood,'';''),'';'')) ActTotalTestTimeGood,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| ActTotalTestTimeBad,'';''),'';'')) ActTotalTestTimeBad,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| EstOverAllYield,'';''),'';'')) EstOverAllYield,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| EstYieldToGrade,'';''),'';'')) EstYieldToGrade,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| ActualOverAllYield,'';''),'';'')) ActOverAllYield,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| ActualYieldToGrade,'';''),'';'')) ActYieldToGrade '
|| chr(10)||' FROM (SELECT npip.id AS NPIProjectID, tg.ProductGrade AS ProductGrade,'
|| chr(10)||' tg.SpeedSuffix AS SpeedSuffix, tg.PartClass AS PartClass, tg.MultiGrade AS MultiGrade,'
|| chr(10)||' tg.EstTotalTestTimeGood AS EstTotalTestTimeGood, tg.EstTotalTestTimeBad AS EstTotalTestTimeBad, '
|| chr(10)||' tg.ActTotalTestTimeGood AS ActTotalTestTimeGood, tg.ActTotalTestTimeBad AS ActTotalTestTimeBad,'
|| chr(10)||' tg.EstOverAllYield AS EstOverAllYield, tg.EstYieldToGrade AS EstYieldToGrade, tg.ActualOverAllYield AS ActualOverAllYield,'
|| chr(10)||' tg.ActualYieldToGrade AS ActualYieldToGrade,'
|| chr(10)||' row_number() OVER (PARTITION BY npip.id ORDER BY npip.id) rnum,'
|| chr(10)||' COUNT(*) OVER (PARTITION BY npip.id) tot'
|| chr(10)||' FROM NPIProjects npip LEFT OUTER JOIN FormFactorInformation ffi ON ffi.NPIProjectId = npip.Id '
|| chr(10)||' LEFT OUTER JOIN TestGrades tg ON tg.FormFactorInformationId = ffi.Id)'
|| chr(10)||' WHERE rnum=tot START WITH rnum =1 CONNECT BY PRIOR rnum = rnum-1 AND PRIOR NPIProjectID = NPIProjectID) rtgd,'
|| chr(10)||' -- NewModels DataElements. Alias Table'|| '''rnmd'''
|| chr(10)||' (SELECT NPIProjectID, '
|| chr(10)||' (LTRIM(sys_connect_by_path(ActualFGMaterialNum,'';''),'';'')) ActualFGMaterialNum,'
|| chr(10)||' (LTRIM(sys_connect_by_path(PackOption,'';''),'';'')) PackOption,'
|| chr(10)||' (LTRIM(sys_connect_by_path(IsReleasedMaterial,'';''),'';'')) IsReleasedMaterial'
|| chr(10)||' FROM (SELECT npip.id AS NPIProjectID,'
|| chr(10)||' nm.ActualFGMaterialNum AS ActualFGMaterialNum, '
|| chr(10)||' nm.PackOption AS PackOption, nm.IsReleasedMaterial AS IsReleasedMaterial,'
|| chr(10)||' row_number() OVER (PARTITION BY npip.id ORDER BY npip.id) rnum,'
|| chr(10)||' COUNT(*) OVER (PARTITION BY npip.id) tot'
|| chr(10)||' FROM NPIProjects npip LEFT OUTER JOIN FormFactorInformation ffi ON ffi.NPIProjectId = npip.Id '
|| chr(10)||' LEFT OUTER JOIN TestGrades tg ON tg.FormFactorInformationId = ffi.Id'
|| chr(10)||' LEFT OUTER JOIN NewModels nm ON nm.TestGradesID = tg.Id)'
|| chr(10)||' WHERE rnum=tot START WITH rnum =1 CONNECT BY PRIOR rnum = rnum-1 AND PRIOR NPIProjectID = NPIProjectID) rnmd,'
|| chr(10)||' --SAPGenericID from NewModels Table for FF, TD, BD, PD and PromisWaferIdentifier from WaferInformation Table for New Wafer and New Die'
|| chr(10)||' (SELECT npip.id AS NPIProjectID, decode(npip.projecttype, ''New Wafer'', wi.PromisWaferIdentifier, ''New Die'', wi.PromisWaferIdentifier, nm.SAPGenericID)SAPGenericID --nm.SAPGenericID'
|| chr(10)||' FROM NPIProjects npip LEFT OUTER JOIN FormFactorInformation ffi ON ffi.NPIProjectId = npip.Id'
|| chr(10)||' LEFT OUTER JOIN WaferInformation wi ON wi.NPIProjectId = npip.Id'
|| chr(10)||' LEFT OUTER JOIN (Select FormFactorInformationID, min(ID)ID from TestGrades GROUP BY FormFactorInformationID) tg ON (tg.FormFactorInformationId = ffi.Id)'
|| chr(10)||' LEFT OUTER JOIN (Select TestGradesID, min(SAPGenericID)SAPGenericID from NewModels GROUP BY TestGradesID) nm ON (nm.TestGradesID = tg.ID)) nmwigen,'
|| chr(10)||' -- Relational TestPasses data. Alias Table' || '''rtpd'''
|| chr(10)||' (SELECT NPIProjectID, '
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| Contactor,'';''),'';'')) Contactor,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| Handler,'';''),'';'')) Handler,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| TesterConfig,'';''),'';'')) TesterConfig,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| NumTestSites,'';''),'';'')) NumTestSites,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| TestTemp,'';''),'';'')) TestTemp,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| SpecialTest,'';''),'';'')) SpecialTest,'
|| chr(10)||' --(LTRIM(sys_connect_by_path(ProductGrade || ''-''|| SpecialTestComments,'';''),'';'')) SpecialTestCommByPassPerGrade,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| EstTestTimeGood,'';''),'';'')) EstTestTimeGood,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| EstTestTimeBad,'';''),'';'')) EstTestTimeBad,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| ActTestTimeGood,'';''),'';'')) ActTestTimeGood,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProductGrade || ''-''|| ActTestTimeBad,'';''),'';'')) ActTestTimeBad'
|| chr(10)||' FROM (SELECT npip.id AS NPIProjectID, tg.ProductGrade AS ProductGrade, tp.contactor AS Contactor, '
|| chr(10)||' tp.Handler AS Handler, tp.TesterConfig AS TesterConfig, tp.NumTestSites AS NumTestSites, '
|| chr(10)||' tp.testtemp AS TestTemp, tp.SpecialTest AS SpecialTest, --tp.SpecialtestComment AS SpecialTestComments,'
|| chr(10)||' tp.EstTestTimeGood AS EstTestTimeGood, tp.EstTestTimeBad AS EstTestTimeBad, '
|| chr(10)||' tp.ActTestTimeGood AS ActTestTimeGood, tp.ActTestTimeBad AS ActTestTimeBad,'
|| chr(10)||' row_number() OVER (PARTITION BY npip.id ORDER BY npip.id) rnum,'
|| chr(10)||' COUNT(*) OVER (PARTITION BY npip.id) tot'
|| chr(10)||' FROM NPIProjects npip LEFT OUTER JOIN FormFactorInformation ffi ON ffi.NPIProjectId = npip.Id '
|| chr(10)||' LEFT OUTER JOIN TestGrades tg ON tg.FormFactorInformationId = ffi.Id'
|| chr(10)||' LEFT OUTER JOIN TestPasses tp ON tp.TestGradesID = tg.Id)'
|| chr(10)||' WHERE rnum=tot START WITH rnum =1 CONNECT BY PRIOR rnum = rnum-1 AND PRIOR NPIProjectID = NPIProjectID)rtpd,'
|| chr(10)||' -- Relational WaferInformation data Alias Table'||'''rwid'''
|| chr(10)||' (SELECT NPIProjectID,'
|| chr(10)||' (LTRIM(sys_connect_by_path(FabSite,'';''),'';'')) FabSite,'
|| chr(10)||' (LTRIM(sys_connect_by_path(WaferDesignType,'';''),'';'')) WaferDesignType,'
|| chr(10)||' (LTRIM(sys_connect_by_path(WaferSize,'';''),'';'')) WaferSize,'
|| chr(10)||' (LTRIM(sys_connect_by_path(FabProcessCode,'';''),'';'')) FabProcessCode,'
|| chr(10)||' (LTRIM(sys_connect_by_path(FabProcessStatus,'';''),'';'')) FabProcessStatus,'
|| chr(10)||' (LTRIM(sys_connect_by_path(FndryFabPlantCode,'';''),'';'')) FndryFabPlantCode,'
|| chr(10)||' (LTRIM(sys_connect_by_path(EstTapeOutDate,'';''),'';'')) EstTapeOutDate,'
|| chr(10)||' (LTRIM(sys_connect_by_path(EstXWidth,'';''),'';'')) EstXWidth,'
|| chr(10)||' (LTRIM(sys_connect_by_path(EstGDPW,'';''),'';'')) EstGDPW,'
|| chr(10)||' (LTRIM(sys_connect_by_path(EstYLength,'';''),'';'')) EstYLength,'
|| chr(10)||' (LTRIM(sys_connect_by_path(PowerDissipation,'';''),'';'')) PowerDissipation,'
|| chr(10)||' (LTRIM(sys_connect_by_path(XWidth,'';''),'';'')) XWidth,'
|| chr(10)||' (LTRIM(sys_connect_by_path(YLength,'';''),'';'')) YLength,'
|| chr(10)||' (LTRIM(sys_connect_by_path(OMSVendorName,'';''),'';'')) OMSVendorName,'
|| chr(10)||' (LTRIM(sys_connect_by_path(GDPW,'';''),'';'')) GDPW,'
|| chr(10)||' (LTRIM(sys_connect_by_path(IsReleaseOnPizzaMask,'';''),'';'')) IsReleaseOnPizzaMask,'
|| chr(10)||' (LTRIM(sys_connect_by_path(FabMfgReleaseReqStatus,'';''),'';'')) FabMfgReleaseReqStatus,'
|| chr(10)||' (LTRIM(sys_connect_by_path(FabMfgReleaseReqComments,'';''),'';'')) FabMfgReleaseReqComments'
|| chr(10)||' FROM (SELECT npip.id AS NPIProjectID, decode(npip.ProjectType, ''New Wafer'', wi.FabSite, ''New Die'', wi.FabSite, di.FabSite) AS FabSite, wi.WaferDesignType AS WaferDesignType,'
|| chr(10)||' wi.WaferSize AS WaferSize, wi.FabProcessCode AS FabProcessCode, wi.FabProcessStatus AS FabProcessStatus, '
|| chr(10)||' wi.FndryFabPlantCode AS FndryFabPlantCode, wi.EstTapeOutDate AS EstTapeOutDate, wi.EstXWidth AS EstXWidth, '
|| chr(10)||' wi.ESTGDPW AS EstGDPW, wi.EstYLength AS EstYLength, wi.PowerDissipation AS PowerDissipation, wi.XWidth AS XWidth, wi.YLength AS YLength,'
|| chr(10)||' wi.OMSVendorName AS OMSVendorName, wi.GDPW AS GDPW, wi.IsReleaseOnPizzaMask AS IsReleaseOnPizzaMask, '
|| chr(10)||' wi.FabMfgReleaseReqStatus AS FabMfgReleaseReqStatus, wi.FabMfgReleaseReqComments AS FabMfgReleaseReqComments, '
|| chr(10)||' row_number() OVER (PARTITION BY npip.id ORDER BY npip.id) rnum,'
|| chr(10)||' COUNT(*) OVER (PARTITION BY npip.id) tot'
|| chr(10)||' FROM NPIProjects npip LEFT OUTER JOIN DieInformation di ON di.NPIProjectId = npip.Id '
|| chr(10)||' LEFT OUTER JOIN TrimProbeInformation tpi ON tpi.Id = di.TrimProbeInformation_id'
|| chr(10)||' LEFT OUTER JOIN WaferInformation wi ON wi.WID = tpi.WaferInformation_id)'
|| chr(10)||' WHERE rnum=tot START WITH rnum =1 CONNECT BY PRIOR rnum = rnum-1 AND PRIOR NPIProjectID = NPIProjectID)rwid,'
|| chr(10)||' --Relational TrimProbeInformation Dataelements.Alias Table'||'''rtpid'''
|| chr(10)||' (SELECT NPIProjectID,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeSite,'';''),'';'')) ProbeSite,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeType,'';''),'';'')) ProbeType,'
|| chr(10)||' (LTRIM(sys_connect_by_path(DieDesignType,'';''),'';'')) DieDesignType,'
|| chr(10)||' (LTRIM(sys_connect_by_path(probemfgreleasereqstatus,'';''),'';'')) probemfgreleasereqstatus,'
|| chr(10)||' (LTRIM(sys_connect_by_path(probemfgreleasereqcomments,'';''),'';'')) probemfgreleasereqcomments,'
|| chr(10)||' (LTRIM(sys_connect_by_path(isvendorpartonwfrordsht,'';''),'';'')) isvendorpartonwfrordsht '
|| chr(10)||' FROM (SELECT npip.id AS NPIProjectID,tpi.ProbeSite AS ProbeSite, tpi.ProbeType AS ProbeType, '
|| chr(10)||' tpi.DieDesignType AS DieDesignType, tpi.probemfgreleasereqstatus AS probemfgreleasereqstatus, '
|| chr(10)||' tpi.probemfgreleasereqcomments AS probemfgreleasereqcomments, '
|| chr(10)||' tpi.isvendorpartonwfrordsht AS isvendorpartonwfrordsht,'
|| chr(10)||' row_number() OVER (PARTITION BY npip.id ORDER BY npip.id) rnum,'
|| chr(10)||' COUNT(*) OVER (PARTITION BY npip.id) tot'
|| chr(10)||' FROM NPIProjects npip LEFT OUTER JOIN WaferInformation wi ON wi.NPIProjectId = npip.Id'
|| chr(10)||' LEFT OUTER JOIN TrimProbeInformation tpi ON tpi.waferinformation_id = wi.wid) '
|| chr(10)||' WHERE rnum=tot START WITH rnum =1 CONNECT BY PRIOR rnum = rnum-1 AND PRIOR NPIProjectID = NPIProjectID)rtpid, '
|| chr(10)||' -- Relational TrimProbeGrades Dataelements.'||' Alias Table ''rtpgd'' '
|| chr(10)||' (SELECT NPIProjectID,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade,'';''),'';'')) ProbeGrade, '
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| EstProbeYield1,'';''),'';'')) EstProbeYield1,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| EstTotalProbeTimeGood,'';''),'';'')) EstTotalProbeTimeGood,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| EstTotalProbeTimeBad,'';''),'';'')) EstTotalProbeTimeBad,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| ActProbeYield,'';''),'';'')) ActProbeYield,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| ActTotalProbeTimeGood,'';''),'';'')) ActTotalProbeTimeGood,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| ActTotalProbeTimeBad,'';''),'';'')) ActTotalProbeTimeBad'
|| chr(10)||' FROM (SELECT npip.id AS NPIProjectID, tpg.ProbeGrade AS ProbeGrade, decode(tpg.EstProbeYield, null, tpg.EstProbeYield, tpg.EstProbeYield1) AS EstProbeYield1,'
|| chr(10)||' tpg.EstTotalProbeTimeGood AS EstTotalProbeTimeGood, tpg.EstTotalProbeTimeBad AS EstTotalProbeTimeBad,'
|| chr(10)||' tpg.ActProbeYield AS ActProbeYield, tpg.ActTotalProbeTimeGood AS ActTotalProbeTimeGood, '
|| chr(10)||' tpg.ActTotalProbeTimeBad AS ActTotalProbeTimeBad,'
|| chr(10)||' row_number() OVER (PARTITION BY npip.id ORDER BY npip.id) rnum,'
|| chr(10)||' COUNT(*) OVER (PARTITION BY npip.id) tot'
|| chr(10)||' FROM NPIProjects npip LEFT OUTER JOIN DieInformation di ON di.NPIProjectid = npip.id'
|| chr(10)||' LEFT OUTER JOIN TrimProbeInformation tpi ON tpi.Id = di.TrimProbeInformation_id'
|| chr(10)||' LEFT OUTER JOIN TrimProbeGrades tpg ON tpg.TrimProbeInformation_Id = tpi.Id)'
|| chr(10)||' WHERE rnum=tot START WITH rnum =1 CONNECT BY PRIOR rnum = rnum-1 AND PRIOR NPIProjectID = NPIProjectID)rtpgd,'
|| chr(10)||' -- Relational TrimProbePasses data.Alias Table' ||'''rtppd'''
|| chr(10)||' (SELECT NPIProjectID, '
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| TrimProbeSystem,'';''),'';'')) TrimProbeSystem,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| NumProbeSites,'';''),'';'')) NumProbeSites,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| ProbeTempReqd1,'';''),'';'')) ProbeTempReqd1,'
|| chr(10)||' (LTRIM(sys_connect_by_path(ProbeGrade || ''-''|| ProbeTemp,'';''),'';'')) ProbeTemp'
|| chr(10)||' FROM (SELECT npip.id AS NPIProjectID, tpg.ProbeGrade AS ProbeGrade,'
|| chr(10)||' tpp.TrimProbeSystem AS TrimProbeSystem, tpp.NumProbeSites AS NumProbeSites, '
|| chr(10)||' decode(tpp.ProbeTempReqd, null, tpp.ProbeTempReqd1,tpp.ProbeTempReqd1) AS ProbeTempReqd1, tpp.ProbeTemp AS ProbeTemp, '
|| chr(10)||' row_number() OVER (PARTITION BY npip.id ORDER BY npip.id) rnum,'
|| chr(10)||' COUNT(*) OVER (PARTITION BY npip.id) tot'
|| chr(10)||' FROM NPIProjects npip LEFT OUTER JOIN DieInformation di ON di.NPIProjectid = npip.id'
|| chr(10)||' LEFT OUTER JOIN TrimProbeInformation tpi ON tpi.Id = di.TrimProbeInformation_id'
|| chr(10)||' LEFT OUTER JOIN TrimProbeGrades tpg ON tpg.TrimProbeInformation_Id = tpi.Id'
|| chr(10)||' LEFT OUTER JOIN TrimProbePasses tpp ON tpp.TrimProbeGrades_Id = tpg.Id)'
|| chr(10)||' WHERE rnum=tot START WITH rnum =1 CONNECT BY PRIOR rnum = rnum-1 AND PRIOR NPIProjectID = NPIProjectID)rtppd'
|| chr(10)||' WHERE nrtd.NPIProjectID = rtpid.NPIProjectID '
|| chr(10)||' AND nrtd.NPIProjectID = rtgd.NPIProjectID'
|| chr(10)||' AND nrtd.NPIProjectID = rtpd.NPIProjectID'
|| chr(10)||' AND nrtd.NPIProjectID = rnmd.NPIProjectID'
|| chr(10)||' AND nrtd.NPIProjectID = nmwigen.NPIProjectID'
|| chr(10)||' AND nrtd.NPIProjectID = rwid.NPIProjectID'
|| chr(10)||' AND nrtd.NPIProjectID = rtpgd.NPIProjectID'
|| chr(10)||' AND nrtd.NPIProjectID = rtppd.NPIProjectID;';
sql_stmt := v_ReportFields || chr(10) || v_Inlinequery;
--print_out(sql_stmt);
qryCtx := DBMS_XMLGEN.newContext (sql_stmt);
DBMS_XMLGEN.SetNullHandling(qryCtx, 2);
p_XMLReport := DBMS_XMLGEN.getXML(qryCtx);
--print_out(p_XMLReport);
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR( -20001, 'An error was encountered - ' || substr(SQLCODE,1,200) || ' -ERROR- ' || SQLERRM, TRUE );
END GETDATA;
---------------------------------------------------------------------------------------------------------------------------------

You can do this:
v_Inlinequery := '--Non relational TestData data elements. Alias Table ''nrtd''
FROM (SELECT npip.id AS NPIProjectID, npip.developmentsite AS DevelopmentSite, npip.projectstatus AS ProjectStatus,
npip.ProjectType AS ProjectType, pt.Creator, ss.Business_Unit, ss.Division, npip.strategy_name AS Strategy_Name,
ffi.assemblysite, ffi.packagetype, ti.directreleasestrat AS directreleasestrat,
ti.testsite, ti.PackSite, npip.umbrellaproject_id, npip.keycustomers, npip.projectedreleasedate AS ProjectedReleasedate,
...';
You don't have to do chr(10) for every line: a PL/SQL string literal can span multiple lines, and the embedded line breaks become part of the string.
SS
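
A side note on the questions above. As far as I know there is no separate DBMS_XMLGEN limit below the PL/SQL VARCHAR2 ceiling of 32767 bytes, and PARTITION BY and inline views are fine in the query. Do note, though, that the string being built here ends with a semicolon (...rtppd.NPIProjectID;): a trailing semicolon inside a dynamic SQL string is a classic cause of ORA-19206. On question 3, in 9iR2 and later DBMS_XMLGEN.newContext is also overloaded to accept a SYS_REFCURSOR, so you can open the cursor with ordinary static SQL and skip the string building altogether. A minimal sketch (the trivial query here stands in for the real one):

DECLARE
   v_cur    SYS_REFCURSOR;
   qryCtx   DBMS_XMLGEN.ctxHandle;
   v_report CLOB;
BEGIN
   -- Open the cursor with static SQL: no quoting, no chr(10),
   -- and no chance of a stray semicolon inside a string.
   OPEN v_cur FOR
      SELECT owner, object_name
        FROM all_objects
       WHERE ROWNUM <= 10;   -- the large PARTITION BY query would go here
   qryCtx := DBMS_XMLGEN.newContext(v_cur);
   DBMS_XMLGEN.setNullHandling(qryCtx, DBMS_XMLGEN.EMPTY_TAG);  -- same as the literal 2
   v_report := DBMS_XMLGEN.getXML(qryCtx);
   DBMS_XMLGEN.closeContext(qryCtx);
   CLOSE v_cur;
END;
/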

Similar Messages

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
    CursoredStream cursor = null;
    try {
        Session session = getTransaction();
        int batchSize = 50;
        ReadAllQuery raq = getQuery();
        raq.useCursoredStream(batchSize, batchSize);
        int num = 0;
        ArrayList<Request> limitRequests = null;
        int totalLimitRequest = 0;
        cursor = (CursoredStream) session.executeQuery(raq);
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            if (num == 0) {
                limitRequests = new ArrayList<Request>(batchSize);
            }
            limitRequests.add(request);
            totalLimitRequest++;
            num++;
            if (num >= batchSize) {
                log.warn("Migrating batch of " + batchSize + " Requests.");
                updateLimitRequestFillPriceForBatch(limitRequests);
                num = 0;
                cursor.releasePrevious(); // release rows already processed from the stream
            }
        }
        if (num > 0) {
            updateLimitRequestFillPriceForBatch(limitRequests);
        }
    } finally {
        if (cursor != null) {
            cursor.close();
        }
    }
    We are committing every 50 records in the unit of work. If we set dontMaintainCache on the ReadAllQuery, we get PrimaryKeyExceptions intermittently, and we do not see much difference in the IdentityMap size.
    Any suggestions/ideas for dealing with large data sets? Thanks

    Hi,
    If I use read-only classes with CursoredStream and execute the query within UOW, should I be saving any memory?
    I had to use UOW because when I use Session to execute the query I get
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J(OAS10.1.3.4) with BeanManagedTransaction.
    Please suggest.
    Thanks
    -Raam
    Edited by: Raam on Apr 2, 2009 1:45 PM

  • ADF how to display a processing page when executing large queries

    The ADF application that I have written currently has the following structure:
    DataPage (search.jsp) that contains a form where the user enters their search criteria --> forward action (doSearch) --> DataAction (validate) that validates the inputted values --> forward action (success) --> DataAction (performSearch) that has a refresh method dragged onto it, and an action that manually sets the iterator for the collection to -1 --> forward action (success) --> DataPage (results.jsp) that displays the results of the then (hopefully) populated collection.
    I am not using a database; I am using a Java collection to hold the data, and the refresh method executes a query against an Autonomy Server that retrieves results in XML format.
    The problem that I am experiencing is that sometimes a user may submit a query that is very large and this creates problems because the browser times out whilst waiting for results to be displayed, and as a result a JBO-29000 null pointer error is displayed.
    I have previously got round this using Java servlets: when a processing servlet is called, it automatically redirects the browser to a processing page with an animation on it so that the user knows something is being processed. The processing page then recalls the servlet every 3 seconds to see if the processing has been completed, and if it has, forwards to the appropriate results page.
    Unfortunately I cannot stop users entering large queries, as the system requires users to be able to search in excess of 5 million documents on a regular basis.
    I'd appreciate any help/suggestions that you may have regarding this matter as soon as possible, so I can make the necessary amendments to the application prior to its pilot in a few weeks' time.

    Hi Steve,
    After a few attempts, yes, I have hit a few snags.
    I'll send you a copy of the example application that I am working on but this is what I have done so far.
    I've taken a standard application that populates a simple java collection (not database driven) with the following structure:
    DataPage --> DataAction (refresh Collection) -->DataPage
    I have then added this code to the (refreshCollectionAction) DataAction
    protected void invokeCustomMethod(DataActionContext ctx) {
        super.invokeCustomMethod(ctx);
        HttpSession session = ctx.getHttpServletRequest().getSession();
        Thread nominalSearch = (Thread) session.getAttribute("nominalSearch");
        if (nominalSearch == null) {
            synchronized (this) {
                // create new instance of the thread
                nominalSearch = new ns(ctx);
            } // end of synchronized wrapper
            session.setAttribute("nominalSearch", nominalSearch);
            session.setAttribute("action", "nominalSearch");
            nominalSearch.start();
            System.err.println("started thread, calling loading page");
            ctx.setActionForward("loading.jsp");
        } else if (nominalSearch.isAlive()) {
            System.err.println("trying to call loading page");
            ctx.setActionForward("loading.jsp");
        } else {
            System.err.println("trying to call results page");
            ctx.setActionForward("success");
        }
    }
    Created another class called ns.java:
    package view;

    import oracle.adf.controller.struts.actions.DataActionContext;
    import oracle.adf.model.binding.DCIteratorBinding;
    import oracle.adf.model.generic.DCRowSetIteratorImpl;

    public class ns extends Thread {
        private DataActionContext ctx;

        public ns(DataActionContext ctx) {
            this.ctx = ctx;
        }

        public void run() {
            System.err.println("START");
            DCIteratorBinding b = ctx.getBindingContainer().findIteratorBinding("currentNominalCollectionIterator");
            ((DCRowSetIteratorImpl) b.getRowSetIterator()).rebuildIteratorUpto(-1);
            // b.executeQuery();
            System.err.println("END");
        }
    }
    and added a loading.jsp page that calls a new dataAction called processing every second. The processing dataAction has the following code within it:
    package view;

    import javax.servlet.http.HttpSession;
    import oracle.adf.controller.struts.actions.DataForwardAction;
    import oracle.adf.controller.struts.actions.DataActionContext;

    public class ProcessingAction extends DataForwardAction {
        protected void invokeCustomMethod(DataActionContext actionContext) {
            // TODO: Override this oracle.adf.controller.struts.actions.DataAction method
            super.invokeCustomMethod(actionContext);
            HttpSession session = actionContext.getHttpServletRequest().getSession();
            String action = (String) session.getAttribute("action");
            if (action.equalsIgnoreCase("nominalSearch")) {
                actionContext.setActionForward("refreshCollection.do");
            }
        }
    }
    I'd appreciate any help or guidance that you may have on this as I really need to implement a generic loading page that can be called by a number of actions within my application as soon as possible.
    Thanks in advance for your help
    David.

  • Is anyone working with large datasets (>200M) in LabVIEW?

    I am working with external bioinformatics databases and find the datasets to be quite large (2 files easily come out at 50M or more). Is anyone working with large datasets like these? What is your experience with performance?

    Colby, it all depends on how much memory you have in your system. You could be okay doing all that with 1GB of memory, but you still have to take care to not make copies of your data in your program. That said, I would not be surprised if your code could be written so that it would work on a machine with much less ram by using efficient algorithms. I am not a statistician, but I know that the averages & standard deviations can be calculated using a few bytes (even on arbitrary length data sets). Can't the ANOVA be performed using the standard deviations and means (and other information like the degrees of freedom, etc.)? Potentially, you could calculate all the various bits that are necessary and do the F-test with that information, and not need to ever have the entire data set in memory at one time. The tricky part for your application may be getting the desired data at the necessary times from all those different sources. I am usually working with files on disk where I grab x samples at a time, perform the statistics, dump the samples and get the next set, repeat as necessary. I can calculate the average of an arbitrary length data set easily by only loading one sample at a time from disk (it's still more efficient to work in small batches because the disk I/O overhead builds up).
    Let me use the calculation of the mean as an example (hopefully the notation makes sense; see the JPG, written out below). What this means in plain English is that the mean can be calculated solely as a function of the current data point, the previous mean, and the sample number. For instance, given the data set [1 2 3 4 5], sum it and divide by 5, and you get 3. Or take it a point at a time: the average of [1]=1, [2+1*1]/2=1.5, [3+1.5*2]/3=2, [4+2*3]/4=2.5, [5+2.5*4]/5=3. This second method requires far more multiplications and divisions, but it only ever requires remembering the previous mean and the sample number, in addition to the new data point. Using this technique, I can find the average of gigs of data without ever needing more than three doubles and an int32 in memory. A similar derivation can be done for the variance, but it's easier to look it up (I can provide it if you have trouble finding it). Also, I think this functionality is built into the LabVIEW Pt by Pt statistics functions.
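    The attached JPG is not reproduced in the thread; in LaTeX, the running-mean recursion described above is the following (the variance line is Welford's standard companion recurrence, added here as an assumption, since the post only alludes to the derivation):
    \bar{x}_n = \frac{(n-1)\,\bar{x}_{n-1} + x_n}{n} = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}, \qquad \bar{x}_1 = x_1
    (n-1)\,s_n^2 = (n-2)\,s_{n-1}^2 + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n)
    Checking against the example in the post: \bar{x}_2 = (1\cdot 1 + 2)/2 = 1.5, \bar{x}_3 = (2\cdot 1.5 + 3)/3 = 2, and so on up to \bar{x}_5 = 3.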
    I think you can probably get the data you need from those db's through some carefully crafted queries, but it's hard to say more without knowing a lot more about your application.
    Hope this helps!
    Chris
    Attachments:
    Mean Derivation.JPG (20 KB)

  • Poor Performance 7.0 Workbook with multiple queries

    Hi all,
    I've got the following issue. Our users have workbooks which contain several queries on different sheets (one on each workbook sheet).
    When a workbook contains 5 or more queries, the overall workbook performance decreases drastically. Workbooks with 8 or more queries result in time-outs and messages like insufficient shared memory.
    Does anyone have the same kind of problems with multiple queries in a Bex 7.0 workbook? Do any one have a solution?
    Any suggestions are welcome.
    Thanks!
    Cheers,
    Ron

    Bill,
    Tried to make a workbook according to your advice. It certainly makes a difference in workbook size. The new workbook is 50% smaller than the older version based on the standard SAP template.
    However, my workbook contains 17 queries, and after 5 minutes it breaks the BW connection. So the basic conclusion is that a BW 7.0 workbook can't work with a large number of data providers/queries. This did work in BEx BW 3.x.
    If any one has any other suggestion, more than welcome.
    Cheers,
    Ron

  • Coding Swing Applications with Large JDBC ResultSets

    Hi,
    Does anyone know where I can find information on efficient ways to code Swing UIs with large JDBC result sets? I have been using JTextArea, but my application seems to constantly run out of memory when I encounter a large result set. Also, any information regarding threading, and whether database queries should be run entirely off the event thread vs. via SwingUtilities.invokeLater(), would be of tremendous help.
    Thanks,
    John Meah

    If doing operations in the AWT thread starts to impair the performance of your app, then create a new class that implements Runnable, and do this:
    Thread t = new Thread(new YourRunnable());
    t.setPriority(Thread.MIN_PRIORITY); // lower priority than the AWT thread
    t.start();
    As far as large result sets go, this is a fairly universal problem. My recommendation is that you arbitrarily impose a limit on how many of the returned records are displayed, maybe 25 records. This should fix your problem.

  • ArrayIndexOutOfBoundsException with large XML

    Hello,
    I have some Java code that queries the DB and displays the XML in a browser. I am using the oracle.jbo.ViewObject object with the writeXML() method. Everything works well until I try to process a large XML file; then I get the following error:
    java.lang.ArrayIndexOutOfBoundsException: 16388
         at oracle.xml.io.XMLObjectOutput.writeUTF(XMLObjectOutput.java:215)
         at oracle.xml.parser.v2.XMLText.writeExternal(XMLText.java:354)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1414)
         at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1267)
         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1245)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1052)
         at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:278)
         at com.evermind.server.ejb.EJBUtils.cloneSerialize(EJBUtils.java:409)
         at com.evermind.server.ejb.EJBUtils.cloneObject(EJBUtils.java:396)
    etc...
    I can change the query to only allow a specific size to be displayed, but the users need to be able to access the larger XML files also. Has anyone else run into this issue?
    Oracle 10g
    Any help or pointers are greatly appreciated.
    Thank you.
    S

    No. We are not limiting the size in our code. Here is a snip of the offending code. The exception occurs on the " results = batchInterfaceView.writeXML(0, 0);" line, but only with larger files.
    <pre>
    try {
        // Request and response helper classes
        XMLHelper request = new XMLHelper(inputXML);
        response = new ResponseXMLHelper();
        if (request.doesValueExist(APP_ERROR_ID)) {
            // get input parameter
            strAppErrorId = request.getValue(APP_ERROR_ID);
            appErrorId = NumberConverter.toBigDecimal(strAppErrorId);
            // get Pos location view
            ViewObject batchInterfaceView =
                findViewObject(GET_ERROR_VIEW, PACKAGE_NAME);
            // get data for selected BatchInterface
            batchInterfaceView.setWhereClauseParam(0, appErrorId);
            batchInterfaceView.executeQuery();
            results = batchInterfaceView.writeXML(0, 0);
            response.addView(results);
        }
    } catch (JboException e) {
        // exception handling trimmed in the original snippet
    }
    </pre>
    Thank you again for any help.
    S

  • Wpg_docload fails with "large" files

    Hi people,
    I have an application that allows the user to query and download files stored in an external application server that exposes its functionality via webservices. There's a lot of overhead involved:
    1. The user queries the file from the application and gets a link that allows her to download the file. She clicks on it.
    2. Oracle submits a request to the webservice and gets a XML response back. One of the elements of the XML response is an embedded XML document itself, and one of its elements is the file, encoded in base64.
    3. The embedded XML document is extracted from the response, and the contents of the file are stored into a CLOB.
    4. The CLOB is converted into a BLOB.
    5. The BLOB is pushed to the client.
    Problem is, it only works with "small" files, less than 50 KB. With "large" files (more than 50 KB), the user clicks on the download link and about one second later gets:
    The requested URL /apex/SCHEMA.GET_FILE was not found on this server
    When I run the webservice outside Oracle, it works fine. I suppose it has to do with PGA/SGA tuning.
    It looks a lot like the problem described at this Ask Tom question.
    Here's my slightly modified code (XMLRPC_API is based on Jason Straub's excellent Flexible Web Service API, http://jastraub.blogspot.com/2008/06/flexible-web-service-api.html):
    CREATE OR REPLACE PROCEDURE get_file ( p_file_id IN NUMBER )
    IS
        l_url                  VARCHAR2( 255 );
        l_envelope             CLOB;
        l_xml                  XMLTYPE;
        l_xml_cooked           XMLTYPE;
        l_val                  CLOB;
        l_length               NUMBER;
        l_filename             VARCHAR2( 2000 );
        l_filename_with_path   VARCHAR2( 2000 );
        l_file_blob            BLOB;
    BEGIN
        SELECT FILENAME, FILENAME_WITH_PATH
          INTO l_filename, l_filename_with_path
          FROM MY_FILES
         WHERE FILE_ID = p_file_id;
        l_envelope := q'!<?xml version="1.0"?>!';
        l_envelope := l_envelope || '<methodCall>';
        l_envelope := l_envelope || '<methodName>getfile</methodName>';
        l_envelope := l_envelope || '<params>';
        l_envelope := l_envelope || '<param>';
        l_envelope := l_envelope || '<value><string>' || l_filename_with_path || '</string></value>';
        l_envelope := l_envelope || '</param>';
        l_envelope := l_envelope || '</params>';
        l_envelope := l_envelope || '</methodCall>';
        l_url := 'http://127.0.0.1/ws/xmlrpc_server.php';
        -- Download XML response from webservice. The file content is in an embedded XML document encoded in base64
        l_xml := XMLRPC_API.make_request( p_url      => l_url,
                                          p_envelope => l_envelope );
        -- Extract the embedded XML document from the XML response into a CLOB
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/methodResponse/params/param/value/string/text()').getclobval(), 1 );
        -- Make a XML document out of the extracted CLOB
        l_xml := xmltype.createxml( l_val );
        -- Get the actual content of the file from the XML
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/downloadResult/contents/text()').getclobval(), 1 );
        -- Convert from CLOB to BLOB
        l_file_blob := XMLRPC_API.clobbase642blob( l_val );
        -- Figure out how big the file is
        l_length    := DBMS_LOB.getlength( l_file_blob );
        -- Push the file to the client
        owa_util.mime_header( 'application/octet', FALSE );
        htp.p( 'Content-length: ' || l_length );
        htp.p( 'Content-Disposition: attachment;filename="' || l_filename || '"' );
        owa_util.http_header_close;
        wpg_docload.download_file( l_file_blob );
    END get_file;
    /
    I'm running XE, PGA is 200 MB, SGA is 800 MB. Any ideas?
    Regards,
    Georger
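    The conversion helper XMLRPC_API.clobbase642blob is part of the Flexible Web Service API linked above and is not shown here. For reference, a minimal sketch of how such a chunked base64 CLOB-to-BLOB decoder is commonly written, assuming the encoded text contains no embedded line breaks (strip CR/LF first if it does) and a chunk size that is a multiple of 4, so every chunk decodes on a clean base64 quantum boundary:
    CREATE OR REPLACE FUNCTION clob_base64_to_blob( p_clob IN CLOB )
        RETURN BLOB
    IS
        l_blob   BLOB;
        l_raw    RAW(32767);
        l_amt    PLS_INTEGER := 19200;  -- multiple of 4: whole base64 quanta per chunk
        l_offset PLS_INTEGER := 1;
        l_len    PLS_INTEGER := DBMS_LOB.getlength( p_clob );
    BEGIN
        DBMS_LOB.createtemporary( l_blob, TRUE );
        WHILE l_offset <= l_len
        LOOP
            -- decode one chunk of the encoded CLOB and append it to the BLOB
            l_raw := UTL_ENCODE.base64_decode(
                         UTL_RAW.cast_to_raw( DBMS_LOB.substr( p_clob, l_amt, l_offset ) ) );
            DBMS_LOB.writeappend( l_blob, UTL_RAW.length( l_raw ), l_raw );
            l_offset := l_offset + l_amt;
        END LOOP;
        RETURN l_blob;
    END clob_base64_to_blob;
    /
    Whether this matches the original helper's internals is an assumption; it is shown only to make the CLOB-to-BLOB step in the pipeline concrete.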

    Script: http://www.indesignsecrets.com/downloads/MultiPageImporter2.5JJB.jsx.zip
    It works great for files up to ~400 pages; with more pages than that, I get a crash at around page 332.
    Thanks

  • Can ui:table deal with large table?

    I have found that h:dataTable can do pagination because its data source is just a DataModel. But ui:table's data source is a data provider, which looks somewhat complex and confusing.
    I have a large table and I want to load the data on demand, so I tried to implement a provider. But I soon found that ui:table may always load all data from the provider.
    In TableRowGroup.java, there is code such as:
    provider.getRowKeys(rowCount, null);
    Passing null makes the provider load all data.
    So ui:table can NOT deal with a large table!?
    thx.
    fan

    But ui:table just uses the TableDataProvider interface. TableDataProvider is a wrapper for the CachedRowSet.
    There are two layers between the ui:table component and the database table: the RowSet layer and the Data Provider layer. The RowSet layer makes the connection to the database, executes the queries, and manages the result set. The Data Provider layer provides a common interface for accessing many types of data, from rowsets, to Array objects, to Enterprise JavaBeans objects.
    Typically, the only time that you work with the RowSet object is when you need to set query parameters. In most other cases, you should use the Data Provider to access and manipulate the data.
    "What can a CachedRowSet (or CachedRowSetProvider?) do?" Check out the API that I pointed you to to see what you can do with a CachedRowSet.
    "Does the Table cache the data itself? Maybe this way is convenient for filter and order?" I do not know the answers to these questions.

  • Bursting a report with multiple queries

    Hi,
    I need to set up bursting in BIP for a report with multiple queries. The output format is PDF and delivery is through e-mail. Since the queries need to be linked, I'm trying to do this using a data template. I've set up split-and-burst based on a field (store_id) from the first query, which is used as a bind variable in the subsequent queries, so I'd ideally want the report to be split based on store_id. The report works fine if I use a parameter for store_id and view the output by store_id. When I try to schedule the report for bursting, it generates and e-mails the reports, but only the last report appears to have correct data output, while the others have only the output from the first query (all other queries appear to return nothing in the report).
    Any suggestions on what could be the issue here? Thanks!
    Here is the data template for the report:
    <dataTemplate name="Report" description="Report" dataSourceRef="ReportDB">
    <dataQuery>
    <sqlStatement name="STORE_VENDOR">
    <![CDATA[SELECT STORE.STORE_ID Q1_STORE_ID, VENDOR.VENDOR_ID Q1_VENDOR_ID, VENDOR.VENDOR_DESC Q1_VENDOR_DESC, PERSON.NAME Q1_NAME
                   FROM STORE, SERVICE_TICKET, VENDOR, ROLE, ROLE_TYPE, PERSON, PERIOD PERIOD_FROM, PERIOD PERIOD_TO
                   WHERE (SERVICE_TICKET.SOURCE_ID = STORE.SOURCE_ID AND SERVICE_TICKET.STORE_ID = STORE.STORE_ID)
                   AND (SERVICE_TICKET.SOURCE_ID = VENDOR.SOURCE_ID AND SERVICE_TICKET.VENDOR_ID = VENDOR.VENDOR_ID)
                   AND (STORE.SOURCE_ID = ROLE.SOURCE_ID AND STORE.STORE_ID = ROLE.STORE_ID)
                   AND (ROLE.SOURCE_ID = ROLE_TYPE.SOURCE_ID AND ROLE.ROLE_TYPE_ID = ROLE_TYPE.TYPE_ID AND ROLE_TYPE.TYPE_DESC = 'PIC')
                   AND (ROLE.SOURCE_ID = PERSON.SOURCE_ID AND ROLE.PERSON_ID = PERSON.PERSON_ID)
                   AND SERVICE_TICKET.START_DATE BETWEEN PERIOD_FROM.START_DATE AND PERIOD_TO.END_DATE
                   AND PERIOD_FROM.PERIOD_DESC = 'Cal Week 1'
                   AND PERIOD_TO.PERIOD_DESC = 'Cal Week 4'
                   GROUP BY STORE.STORE_ID, VENDOR.VENDOR_ID, VENDOR.VENDOR_DESC, PERSON.NAME
                   ORDER BY STORE.STORE_ID]]>
    </sqlStatement>
    <sqlStatement name="WO">
    <![CDATA[SELECT STORE.STORE_ID Q3_STORE_ID, 'Nightly Cleaning' Q3_NIGHTLY_CLEANING, 'Nightly Cleaning' Q3_DESCRIPTION, SERVICE_TICKET.TICKET_ID Q3_TICKET_ID, TO_CHAR(SERVICE_TICKET.END_DATE, 'MM/DD/YYYY') Q3_END_DATE, 'Excellent' Q3_QUALITY_RATING_1, 'Satisfactory' Q3_QUALITY_RATING_2, 'Unsatisfactory' Q3_QUALITY_RATING_3
                   FROM SERVICE_TICKET, STORE, PERIOD PERIOD_FROM, PERIOD PERIOD_TO
                   WHERE (SERVICE_TICKET.SOURCE_ID = STORE.SOURCE_ID AND SERVICE_TICKET.STORE_ID = STORE.STORE_ID)
                   AND SERVICE_TICKET.START_DATE BETWEEN PERIOD_FROM.START_DATE AND PERIOD_TO.END_DATE
                   AND STORE.STORE_ID = :Q1_STORE_ID
                   AND PERIOD_FROM.PERIOD_DESC = 'Cal Week 1'
                   AND PERIOD_TO.PERIOD_DESC = 'Cal Week 4']]>
    </sqlStatement>
    <sqlStatement name="SERVC_TYPE_1">
    <![CDATA[SELECT STORE.STORE_ID Q4_STORE_ID, ZONE.ZONE_DESC Q4_ZONE_DESC, SERVICE_TICKET_LINE.SERVICE_COST Q4_SERVICE_COST, SERVICE_TICKET_LINE.TICKET_ID Q4_TICKET_ID, TO_CHAR(SERVICE_TICKET_LINE.WO_END_DATE, 'MM/DD/YYYY') Q4_WO_END_DATE, 'Excellent' Q4_QUALITY_RATING_1, 'Satisfactory' Q4_QUALITY_RATING_2, 'Unsatisfactory' Q4_QUALITY_RATING_3
                   FROM SERVICE_TICKET_LINE, SERVICE_TICKET, STORE, SERVICE_TYPE, ZONE, PERIOD PERIOD_FROM, PERIOD PERIOD_TO
                   WHERE (SERVICE_TICKET.SOURCE_ID = SERVICE_TICKET_LINE.SOURCE_ID AND SERVICE_TICKET.TICKET_ID = SERVICE_TICKET_LINE.TICKET_ID)
                   AND (SERVICE_TICKET.SOURCE_ID = STORE.SOURCE_ID AND SERVICE_TICKET.STORE_ID = STORE.STORE_ID)
                   AND (SERVICE_TICKET_LINE.SOURCE_ID = SERVICE_TYPE.SOURCE_ID AND SERVICE_TICKET_LINE.SERVICE_TYPE_ID = SERVICE_TYPE.TYPE_ID)
                   AND (SERVICE_TICKET_LINE.SOURCE_ID = ZONE.SOURCE_ID AND SERVICE_TICKET_LINE.ZONE_ID = ZONE.ZONE_ID)
                   AND SERVICE_TYPE.TYPE_DESC = 'ABC'
                   AND SERVICE_TICKET.START_DATE BETWEEN PERIOD_FROM.START_DATE AND PERIOD_TO.END_DATE
                   AND STORE.STORE_ID = :Q1_STORE_ID
                   AND PERIOD_FROM.PERIOD_DESC = 'Cal Week 1'
                   AND PERIOD_TO.PERIOD_DESC = 'Cal Week 4'
                   ORDER BY SERVICE_TICKET_LINE.SOURCE_ID, SERVICE_TICKET_LINE.TICKET_ID, SERVICE_TICKET_LINE.LINE_ID]]>
    </sqlStatement>
    <sqlStatement name="SERVC_TYPE_2">
    <![CDATA[SELECT STORE.STORE_ID Q5_STORE_ID, ZONE.ZONE_DESC Q5_ZONE_DESC, SERVICE_TICKET_LINE.SERVICE_COST Q5_SERVICE_COST, SERVICE_TICKET_LINE.TICKET_ID Q5_TICKET_ID, TO_CHAR(SERVICE_TICKET_LINE.WO_END_DATE, 'MM/DD/YYYY') Q5_WO_END_DATE, 'Excellent' Q5_QUALITY_RATING_1, 'Satisfactory' Q5_QUALITY_RATING_2, 'Unsatisfactory' Q5_QUALITY_RATING_3
                   FROM SERVICE_TICKET_LINE, SERVICE_TICKET, STORE, SERVICE_TYPE, ZONE, PERIOD PERIOD_FROM, PERIOD PERIOD_TO
                   WHERE (SERVICE_TICKET.SOURCE_ID = SERVICE_TICKET_LINE.SOURCE_ID AND SERVICE_TICKET.TICKET_ID = SERVICE_TICKET_LINE.TICKET_ID)
                   AND (SERVICE_TICKET.SOURCE_ID = STORE.SOURCE_ID AND SERVICE_TICKET.STORE_ID = STORE.STORE_ID)
                   AND (SERVICE_TICKET_LINE.SOURCE_ID = SERVICE_TYPE.SOURCE_ID AND SERVICE_TICKET_LINE.SERVICE_TYPE_ID = SERVICE_TYPE.TYPE_ID)
                   AND (SERVICE_TICKET_LINE.SOURCE_ID = ZONE.SOURCE_ID AND SERVICE_TICKET_LINE.ZONE_ID = ZONE.ZONE_ID)
                   AND SERVICE_TYPE.TYPE_DESC = 'XYZ'
                   AND SERVICE_TICKET.START_DATE BETWEEN PERIOD_FROM.START_DATE AND PERIOD_TO.END_DATE
                   AND STORE.STORE_ID = :Q1_STORE_ID
                   AND PERIOD_FROM.PERIOD_DESC = 'Cal Week 1'
                   AND PERIOD_TO.PERIOD_DESC = 'Cal Week 4'
                   ORDER BY SERVICE_TICKET_LINE.SOURCE_ID, SERVICE_TICKET_LINE.TICKET_ID, SERVICE_TICKET_LINE.LINE_ID]]>
    </sqlStatement>
    <sqlStatement name="SERVC_TYPE_3">
    <![CDATA[SELECT STORE.STORE_ID Q6_STORE_ID, ZONE.ZONE_DESC Q6_ZONE_DESC, 'Excellent' Q6_QUALITY_RATING_1, 'Satisfactory' Q6_QUALITY_RATING_2, 'Unsatisfactory' Q6_QUALITY_RATING_3, 'One' Q6_QTY_MISSING_1, 'Two' Q6_QTY_MISSING_2, '3 or More' Q6_QTY_MISSING_3
                   FROM SERVICE_TICKET_LINE, SERVICE_TICKET, STORE, SERVICE_TYPE, ZONE, PERIOD PERIOD_FROM, PERIOD PERIOD_TO
                   WHERE (SERVICE_TICKET.SOURCE_ID = SERVICE_TICKET_LINE.SOURCE_ID AND SERVICE_TICKET.TICKET_ID = SERVICE_TICKET_LINE.TICKET_ID)
                   AND (SERVICE_TICKET.SOURCE_ID = STORE.SOURCE_ID AND SERVICE_TICKET.STORE_ID = STORE.STORE_ID)
                   AND (SERVICE_TICKET_LINE.SOURCE_ID = SERVICE_TYPE.SOURCE_ID AND SERVICE_TICKET_LINE.SERVICE_TYPE_ID = SERVICE_TYPE.TYPE_ID)
                   AND (SERVICE_TICKET_LINE.SOURCE_ID = ZONE.SOURCE_ID AND SERVICE_TICKET_LINE.ZONE_ID = ZONE.ZONE_ID)
                   AND SERVICE_TYPE.TYPE_DESC = 'PQR'
                   AND SERVICE_TICKET.START_DATE BETWEEN PERIOD_FROM.START_DATE AND PERIOD_TO.END_DATE
                   AND STORE.STORE_ID = :Q1_STORE_ID
                   AND PERIOD_FROM.PERIOD_DESC = 'Cal Week 1'
                   AND PERIOD_TO.PERIOD_DESC = 'Cal Week 4'
                   ORDER BY SERVICE_TYPE.TYPE_ID]]>
    </sqlStatement>
    </dataQuery>
    <dataStructure>
         <group name="G_STORE_VENDOR" source="STORE_VENDOR">
              <element name="Q1_STORE_ID" value="Q1_STORE_ID"/>
              <element name="Q1_VENDOR_ID" value="Q1_VENDOR_ID"/>
              <element name="Q1_VENDOR_DESC" value="Q1_VENDOR_DESC"/>
              <element name="Q1_NAME" value="Q1_NAME"/>
         </group>     
         <group name="G_WO" source="WO">
              <element name="Q3_NIGHTLY_CLEANING" value="Q3_NIGHTLY_CLEANING"/>
              <element name="Q3_DESCRIPTION" value="Q3_DESCRIPTION"/>
              <element name="Q3_TICKET_ID" value="Q3_TICKET_ID"/>
              <element name="Q3_END_DATE" value="Q3_END_DATE"/>
              <element name="Q3_QUALITY_RATING_1" value="Q3_QUALITY_RATING_1"/>
              <element name="Q3_QUALITY_RATING_2" value="Q3_QUALITY_RATING_2"/>
              <element name="Q3_QUALITY_RATING_3" value="Q3_QUALITY_RATING_3"/>
         </group>     
         <group name="G_SERVC_TYPE_1" source="SERVC_TYPE_1">
              <element name="Q4_ZONE_DESC" value="Q4_ZONE_DESC"/>
              <element name="Q4_SERVICE_COST" value="Q4_SERVICE_COST"/>
              <element name="Q4_TICKET_ID" value="Q4_TICKET_ID"/>
              <element name="Q4_WO_END_DATE" value="Q4_WO_END_DATE"/>
              <element name="Q4_QUALITY_RATING_1" value="Q4_QUALITY_RATING_1"/>
              <element name="Q4_QUALITY_RATING_2" value="Q4_QUALITY_RATING_2"/>
              <element name="Q4_QUALITY_RATING_3" value="Q4_QUALITY_RATING_3"/>
         </group>     
         <group name="G_SERVC_TYPE_2" source="SERVC_TYPE_2">
              <element name="Q5_ZONE_DESC" value="Q5_ZONE_DESC"/>
              <element name="Q5_SERVICE_COST" value="Q5_SERVICE_COST"/>
              <element name="Q5_TICKET_ID" value="Q5_TICKET_ID"/>
              <element name="Q5_WO_END_DATE" value="Q5_WO_END_DATE"/>
              <element name="Q5_QUALITY_RATING_1" value="Q5_QUALITY_RATING_1"/>
              <element name="Q5_QUALITY_RATING_2" value="Q5_QUALITY_RATING_2"/>
              <element name="Q5_QUALITY_RATING_3" value="Q5_QUALITY_RATING_3"/>
         </group>     
         <group name="G_SERVC_TYPE_3" source="SERVC_TYPE_3">
              <element name="Q6_ZONE_DESC" value="Q6_ZONE_DESC"/>
              <element name="Q6_QUALITY_RATING_1" value="Q6_QUALITY_RATING_1"/>
              <element name="Q6_QUALITY_RATING_2" value="Q6_QUALITY_RATING_2"/>
              <element name="Q6_QUALITY_RATING_3" value="Q6_QUALITY_RATING_3"/>
              <element name="Q6_QTY_MISSING_1" value="Q6_QTY_MISSING_1"/>
              <element name="Q6_QTY_MISSING_2" value="Q6_QTY_MISSING_2"/>
              <element name="Q6_QTY_MISSING_3" value="Q6_QTY_MISSING_3"/>
         </group>
    </dataStructure>
    </dataTemplate>

    Hello user6428199,
    When you use ":Q1_STORE_ID" in your queries, BI Publisher looks for a report parameter named Q1_STORE_ID. Change all your report queries to get data for all the store_id's available. As you are bursting the report using store_id as the bursting key, BI Publisher will split the report based on store_id and deliver content based on it. No matter how many queries you may have, BI Publisher will loop through all the store_id's available, and all the queries will return data for that particular id in a report.
    Thanks,
    Machaan

  • Cannot resize the albums in album view in Itunes 11.  For those of us with large libraries (160gb), this is extremely frustrating.  I want to make the artwork smaller.  This seems to be a serious oversight.


    I agree.
    It was very useful to reduce everything to small and quickly scan down looking for missing artwork.
    Particularly useful when I only connect to the machine with iTunes via screen sharing, so everything can be a bit slower to scroll through at times.
    In fact, iTunes 11 when used purely as a server for Apple TV etc. is a downgrade in usability IMO, as they've got rid of some of the useful views that helped me manage my library remotely.

  • Business Partner records with large numbers of addresses -- Move-in issue

    Friends,
    Our recent CCS implementation (ECC6.0 EhP3 & CRM 2007) included the creation of some Business Partner records with large numbers of addresses. Most of these are associated with housing authorities, large developers, and large apartment complex owners. Some of the Business Partners have over 1000 address records, and one particular BP has over 6000 addresses that were migrated from our legacy system. We are experiencing very long run times trying to execute move-ins and move-outs, because the system reads the volume of addresses attached to the Business Partner. In many cases, the system simply times out before it can execute the transaction. SAP's suggestion is that we run a BAPI to cleanse the addresses and also implement a BADI to prevent the creation of excess addresses.
    Two questions surround the implementation of this code. Will the BAPI to cleanse the addresses wipe out all address records except for the standard address? That presents an issue: we need to ensure that the standard address on the BP record is the address we have identified as the proper mailing address. The second question is about the BADI to prevent the creation of excess addresses. It looks like this BADI is going to prevent the move-in address from updating the standard address on the BP record, which in the vast majority of cases is exactly what we would want.
    Does anyone have any experience with this situation of excess BP addresses and how did you handle the manipulation and cleansing of the data and how do you maintain it going forward?
    Our solution is ECC6.0Ehp3 with CRM2007...latest patch level
    Specifically, SAP suggested we apply/review these notes:
    Note 1249787 - Performance problem during move-in with huge addresses
    **applied this ....did not help
    Note 861528 - Performance in move-in for partner w/ large no of addresses
    **older ISU4.7 note
    Directly from our SAP message:
    use the function module
    BAPI_BUPA_ADDRESS_REMOVE or run BAPI_ISUPARTNER_CHANGE to delete
    unnecessary business partner addresses.
    Use BAdI ISU_MOVEIN_CUSTOMIZE to avoid the creation of unnecessary
    business partner addresses (cf. note 706686) in the future for that
    business partner.
    Note 706686 - Move-in: Avoid unnecessary business partner addresses
    Does anyone have any suggestions and have you used above notes/FMs to resolve something like this?
    Thanks,
    Nick

    Nick:
    One thing to understand is that the BADI and BAPI are just the tools or mechanisms that will enable you to fix this situation. You or your development team will need to define the rules under which these tools are used. Let's take them one at a time.
    BAPI - the BAPI for business partner address maintenance. It would seem that you need to create a program which first reads the partners and the addresses assigned to them, and then compares these addresses to each other to find duplicates. These duplicates can then be removed, provided they are not used elsewhere in the system (i.e. on a contract account).
    BADI - the BADI for business partner address maintenance. Here you would need to identify the particular scenarios where addresses should not be copied. I would expect that most move-ins would meet the criteria of adding the address and changing the standard address. But for some, i.e. landlords or housing complexes, you might not add an address because it already exists for the business partner, and you might not change the standard address because those accounts do not fall under that scenario. This will take some thinking and design to ensure that the address add/change functions are executed under the right circumstances.
    regards,
    bill.

  • Out.println() problems with large amount of data in jsp page

    I have this kind of code in my jsp page:
    out.clearBuffer();
    out.println(myText); // size of myText is about 300 KB
    The problem is that I manage to print the whole text only sometimes. Very often the receiving page gets only the first 40 KB and then the printing stops.
    I have run tests where I split myText into smaller parts and print them one by one:
    Vector texts = splitTextToSmallerParts(myText);
    for (int i = 0; i < texts.size(); i++) {
        out.print(texts.get(i));
        out.flush();
    }
    This produces the same kind of result: sometimes all parts are printed, but mostly only the first parts.
    I have tried to increase the buffer size, but that doesn't make the printing reliable either. I have also tried autoFlush="false" so that I flush before the buffer overflows; again the same result, sometimes it works, sometimes it doesn't.
    Originally I used a setup where Visual Basic in Excel calls a JSP page. However, I don't think that matters, since the same problems occur if I use a browser.
    If anyone knows something about problems with large JSP pages, I would appreciate the help.

    Well, there are many ways you could do this, but it depends on what you are looking for.
    For instance, generating an Excel Spreadsheet could be quite easy:
    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.io.*;

    public class TableTest extends HttpServlet {
        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException, ServletException {
            response.setContentType("application/xls");
            PrintWriter out = new PrintWriter(response.getOutputStream());
            out.println("Col1\tCol2\tCol3\tCol4");
            out.println("1\t2\t3\t4");
            out.println("3\t1\t5\t7");
            out.println("2\t9\t3\t3");
            out.flush();
            out.close();
        }
    }
    Just try this simple code, it works just fine... I used the same approach to generate a report of 30000 rows and 40 cols (more or less 5 MB), so it should do the job for you.
    Regards

  • Can hard disk be replaced with larger capacity of SSD or SATA disk?

    Weighing whether to buy a Twist: can I replace the hard disk with a larger-capacity SSD or SATA disk? Is there any capacity limit on the SATA disk? Are SSD and SATA disks interchangeable on the Twist (say, replace the SSD with a SATA disk, or vice versa)?
    Many thanks.

    The Twist's HDD is a 2.5-inch, 7 mm drive, which you can upgrade to any other SATA disk of this size, either platter or SSD.
    Regards,
    Jin Li
    May this year, be the year of 'DO'!
    I am a volunteer, and not a paid staff of Lenovo or Microsoft

  • How do you apply 2 different types of page (first with a large logo, second and following with a small one) in the same document?

    I'm using Pages 5.1 and I need 2 different types of page in 1 document: the first page with a large logo and the second with a small one. In Pages '09 there was an option for a different first page ...

    That's not my problem, and I am unfortunately not a professional. Sorry.
    In Pages '09 there was an option "Different First Page" (like MS Word). I could create different pages (first and second) with two different logos and text boxes. When the document was opened, there was only one page, with the large logo; while writing, the small logo and the other text boxes appeared on the second page (and subsequent pages).
    Unfortunately I miss this function, that is, the inclusion of items on a second page. However, this must not necessarily be active.
