SQL query slowness due to RANK and columns with NULL values:
I have the following table in the database, with around 10 million records:
Declaration:
create table PropertyOwners (
[Key] int not null primary key,
PropertyKey int not null,
BoughtDate datetime,
OwnerKey int null,
GroupKey int null
)
go
[Key] is the primary key, and the combination of PropertyKey, BoughtDate, OwnerKey and GroupKey is unique.
With the following index:
CREATE NONCLUSTERED INDEX [IX_PropertyOwners] ON [dbo].[PropertyOwners]
(
[PropertyKey] ASC,
[BoughtDate] DESC,
[OwnerKey] DESC,
[GroupKey] DESC
)
go
Description of the case:
For a single BoughtDate, one property can belong to multiple owners or to a single group. For a single record there can be either an OwnerKey or a GroupKey but not both, so one of them is always null. I am trying to retrieve the data from the table using the following query for the OwnerKey. If there are rows for the same property for both owners and a group at the same time, the rows having an OwnerKey are preferred; that is why I am using "OwnerKey desc" in the RANK function.
declare @ownerKey int = 40000
select PropertyKey, BoughtDate, OwnerKey, GroupKey
from (
select PropertyKey, BoughtDate, OwnerKey, GroupKey,
RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
from PropertyOwners
) as result
where result.[Rank]=1 and result.[OwnerKey]=@ownerKey
It takes 2-3 seconds to get the records, which is too slow, and it takes a similar time when I retrieve the records using the GroupKey. But when I tried to get the records for the PropertyKey with the same query, it executed in 10 milliseconds.
Maybe the slowness is because OwnerKey/GroupKey in the table can be null and SQL Server is unable to seek on them. I have also tried using an indexed view to pre-rank the rows, but I can't use it in my query because the RANK function is not supported in indexed views.
Please note this table is updated only once a day, and I am using SQL Server 2008 R2. Any help will be greatly appreciated.
create table #result (PropertyKey int not null, BoughtDate datetime, OwnerKey int null, GroupKey int null, [Rank] int not null)
create index idx on #result (OwnerKey, [Rank])
insert into #result(PropertyKey, BoughtDate, OwnerKey, GroupKey, [Rank])
select PropertyKey, BoughtDate, OwnerKey, GroupKey,
RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
from PropertyOwners
go
declare @ownerKey int = 1
select PropertyKey, BoughtDate, OwnerKey, GroupKey
from #result as result
where result.[Rank]=1
and result.[OwnerKey]=@ownerKey
go
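Since the table is only updated once a day, the same pre-ranking idea can be persisted in a permanent table rebuilt by the daily load, so the ranking cost is paid once per day rather than per query. This is only a sketch: the table name PropertyOwnersRanked is invented here, and the index assumes the query filters on OwnerKey and [Rank] and returns the four columns shown above.

```sql
-- Rebuild the pre-ranked table as the last step of the daily load.
-- (PropertyOwnersRanked is a hypothetical name, not from the original post.)
if object_id('dbo.PropertyOwnersRanked') is not null
    drop table dbo.PropertyOwnersRanked
go
select PropertyKey, BoughtDate, OwnerKey, GroupKey,
       rank() over (partition by PropertyKey
                    order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
into dbo.PropertyOwnersRanked
from dbo.PropertyOwners
go
-- Covering index so the per-owner lookup becomes a single seek.
create nonclustered index IX_PropertyOwnersRanked
    on dbo.PropertyOwnersRanked (OwnerKey, [Rank])
    include (PropertyKey, BoughtDate, GroupKey)
go
```

With this in place, the per-owner query is the same SELECT as above run against PropertyOwnersRanked, with no window function evaluated at query time.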
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
Similar Messages
-
To overcome column with null value (urgent)
Hi all,
When I query, I get columns with null values. How do I solve it?
Thanks in advance.
rcs
SQL> DESC SCOTT.CB1;
Name Null? Type
ID NUMBER
SUPCODE NUMBER
SUPLNAME VARCHAR2(100)
NAME VARCHAR2(100)
ITEMCODE VARCHAR2(10)
RECDOC NUMBER
RECDATE VARCHAR2(10)
TOTVALUE NUMBER
QTY NUMBER
CB_IPNO NUMBER
CB_VNNO NUMBER
CB_VDT VARCHAR2(10)
CB_AMT NUMBER
RECDOC_GR VARCHAR2(30)
RECDATE_GR DATE
SUPCODE_GR VARCHAR2(10)
THE TABLE LOOKS LIKE THIS (NOT ALL DATA IS IN THE SAME ROW, BECAUSE I INSERTED THE LAST 3 COLUMN VALUES SEPARATELY):
ID SUPCODE SUPLNAME NAME ITEMCODE RECDOC RECDATE TOTVALUE QTY CB_IPNO CB_VNNO CB_VDT CB_AMT RECDOC_GR RECDATE_GR SUPCODE_GR
2015 AAAA 04117 9083 10545.6 78
2016 BBBB 04609 9087 25200 3600
2017 GGGG 04609 9088 28175 4025
2018 36591371.64 2565017.27
00001/07-08 02/04/2007 14020362
00002/07-08 02/04/2007 14020362
00003/07-08 02/04/2007 14010254
00004/07-08 02/04/2007 14010254
00005/07-08 02/04/2007 14021458
SQL> SELECT DISTINCT ID, SUPCODE_GR, NAME, ITEMCODE, RECDOC, RECDATE_GR, TOTVALUE, QTY FROM SCOTT.CB
1;
ID SUPCODE_GR
NAME
ITEMCODE RECDOC RECDATE_G TOTVALUE QTY
1
PRO.AT.ALU.POWDER UNCOATED
04609 15 51975 7425
2
PEN, GEL PEN
07969 17 154 11
ID SUPCODE_GR
I NEED THE RESULT AS FOLLOWS (ALL RESPECTIVE DATA IN ONE LINE, NOT LIKE THE OUTPUT ABOVE):
ID SUPCODE SUPLNAME NAME ITEMCODE RECDOC RECDATE TOTVALUE QTY CB_IPNO CB_VNNO CB_VDT CB_AMT RECDOC_GR RECDATE_GR SUPCODE_GR
2015 AAAA 04117 9083 10545.6 78 00001/07-08 02/04/2007 14020362
============
Even accounting for the formatting, I'm not sure I understand the question. It could be any number of different problems or non-problems.
-
How to take the Average of a DATEDIFF column with NULL values?
I am building an SSRS report that can display the average of a calculated datediff column in dd/hh/mm format with the following formula:
=Avg(IIF(Fields!LastCorrectedDate.Value is nothing, 0,
    DATEDIFF("n",cdate(Fields!LastCorrectedDate.Value),cdate(Fields!LastSignDate.Value)) \ (60*24) & ":" &
    DATEDIFF("n",cdate(Fields!LastCorrectedDate.Value),cdate(Fields!LastSignDate.Value)) mod (60*24)\60 & ":" &
    DATEDIFF("n",cdate(Fields!LastCorrectedDate.Value),cdate(Fields!LastSignDate.Value)) mod (60*24)
        - (((DATEDIFF("n",cdate(Fields!LastCorrectedDate.Value),cdate(Fields!LastSignDate.Value)) mod (60*24))\60)*60) ))
SSRS does not raise any errors with the formula, and I have used the same formula for other columns without issue. I have noticed that this column includes null values, which I think may be the problem. When the report runs, it returns #ERROR for the column but does not give a reason why. I am using SSRS Report Builder with Visual Basic logic, as opposed to embedded SQL. Any help or feedback would be greatly appreciated.
Hi No Ragrets,
According to your description, you want to calculate the average of the date/time differences. Right?
In Reporting Services, the Avg() function is only available for numeric values. In this scenario, the DateDiff() function that calculates the difference in minutes returns a number, so we can do the average calculation on those return values first and then format the result as a time. We have tested this case in our local environment. Please try the following expression:
=floor(avg(DateDiff("n",Fields!StartDate.Value,Fields!EndDate.Value))) \(24*60) &":"&
floor(avg(DateDiff("n",Fields!StartDate.Value,Fields!EndDate.Value))/60 mod 24 )&":"&
floor(avg(DateDiff("n",Fields!StartDate.Value,Fields!EndDate.Value))) mod 60
If you have any question, please feel free to ask.
Best Regards,
Simon Hou -
Sql query slow due to case statement on Joins
Hi
The SQL query runs very slowly, about 30 minutes, when the CASE statement below is added to the joins. Could you please let me know how to tune it? If the CASE statement is not there, it runs in only 1 minute.
( CASE
WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
THEN DT_REQ_ALL.EMPMGMTCD
END =DB2.DB2_FDW_MGMT_V.MH_CHILD )
SELECT DISTINCT
D.DB2_FDW_MGMT_V.RC_PARENT,
DT_REQ_ALL.FULL_NAME,
DT_REQ_ALL.EMP_COMPANY_CODE,
DT_REQ_ALL.EMP_COST_CENTER,
PO.PO_VENDORS.VENDOR_NAME,
PO_PO_HEADERS_ALL2.SEGMENT1,
PO_PO_HEADERS_ALL2.CREATION_DATE,
PO_DIST_GL_CODE_COMB.SEGMENT1,
PO_DIST_GL_CODE_COMB.SEGMENT2,
PO_PO_HEADERS_ALL2.CURRENCY_CODE,
PO_INV_DIST_ALL.INVOICE_NUM,
PO_INV_DIST_ALL.INVOICE_DATE,
(PO_INV_DIST_ALL.INVOICE_AMOUNT* PO_Rates_GL_DR.CONVERSION_RATE),
(NVL(to_number(PO_DIST_ALL.AMOUNT_BILLED),0) * PO_Rates_GL_DR.CONVERSION_RATE),
PO_LINES_LOC.LINE_NUM,
GL.GL_SETS_OF_BOOKS.NAME,
CASE
WHEN TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) > PO_INV_DIST_ALL.INVOICE_DATE
THEN 1
ELSE 0
END ,
PO.PO_REQUISITION_LINES_ALL.LINE_LOCATION_ID,
TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE,'WW') + 8 WEEK_Ending
FROM
DB2.DB2_FDW_MGMT_V,
PO.PO_VENDORS,
PO.PO_HEADERS_ALL PO_PO_HEADERS_ALL2,
GL.GL_CODE_COMBINATIONS PO_DIST_GL_CODE_COMB,
AP.AP_INVOICES_ALL PO_INV_DIST_ALL,
PO.PO_DISTRIBUTIONS_ALL PO_DIST_ALL,
PO.PO_LINES_ALL PO_LINES_LOC,
GL.GL_SETS_OF_BOOKS,
PO.PO_REQUISITION_LINES_ALL,
PO.PO_LINE_LOCATIONS_ALL,
AP.AP_INVOICE_DISTRIBUTIONS_ALL PO_DIST_INV_DIST_ALL,
APPS.HR_OPERATING_UNITS,
PO.PO_REQ_DISTRIBUTIONS_ALL,
( SELECT DISTINCT
PO_RDA.DISTRIBUTION_ID,
PO_RLA.requisition_line_id,
PO_RHA.DESCRIPTION PO_Descr,
PO_RHA.NOTE_TO_AUTHORIZER PO_Justification,
Req_Emp.FULL_NAME,
GL_CC.SEGMENT1 Req_Company_Code,
GL_CC.SEGMENT2 Req_Cost_Center,
Req_Emp_CC.SEGMENT1 Emp_Company_Code,
Req_Emp_CC.SEGMENT2 Emp_Cost_Center,
(Case
When GL_CC.SEGMENT2 <> 8000
Then TRUNC(GL_CC.SEGMENT1) || TRUNC(GL_CC.SEGMENT2) || '_' || NVL(GL_CC.SEGMENT6,'000')
Else TRUNC(Req_Emp_CC.SEGMENT1) || TRUNC(Req_Emp_CC.SEGMENT2) || '_' || NVL(Req_Emp_CC.SEGMENT6,'000')
End) EmpMgmtCD
FROM
PO.po_requisition_lines_all PO_rla,
PO.po_requisition_headers_all PO_rha,
PO.PO_REQ_DISTRIBUTIONS_ALL po_RDA,
GL.GL_CODE_COMBINATIONS gl_cc,
HR.PER_ALL_PEOPLE_F Req_Emp,
HR.PER_ALL_ASSIGNMENTS_F Req_Emp_Assign,
HR.hr_all_organization_units Req_Emp_Org,
HR.pay_cost_allocation_keyflex Req_Emp_CC
WHERE
PO_RDA.CODE_COMBINATION_ID = GL_CC.CODE_COMBINATION_ID and
PO_RLA.REQUISITION_LINE_ID = PO_RDA.REQUISITION_LINE_ID AND
PO_RLA.to_person_id = Req_Emp.PERSON_ID AND
PO_RLA.REQUISITION_HEADER_ID = PO_RHA.REQUISITION_HEADER_ID AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp.effective_start_date and Req_Emp.effective_end_date OR
Req_Emp.effective_start_date IS NULL) AND
Req_Emp.PERSON_ID = Req_Emp_Assign.PERSON_ID AND
Req_Emp_Assign.organization_id = Req_Emp_Org.organization_id AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp_Assign.effective_start_date and Req_Emp_Assign.effective_end_date OR
Req_Emp_Assign.effective_start_date IS NULL) AND
Req_Emp_Assign.primary_flag = 'Y' AND
Req_Emp_Assign.assignment_type = 'E' AND
Req_Emp_Org.cost_allocation_keyflex_id = Req_Emp_CC.cost_allocation_keyflex_id
) DT_REQ_ALL,
( SELECT
FROM_CURRENCY,
TO_CURRENCY,
CONVERSION_DATE,
CONVERSION_RATE
FROM GL.GL_DAILY_RATES
UNION
SELECT Distinct
'USD',
'USD',
CONVERSION_DATE,
1
FROM GL.GL_DAILY_RATES
) PO_Rates_GL_DR
WHERE
( PO_DIST_GL_CODE_COMB.CODE_COMBINATION_ID=PO_DIST_ALL.CODE_COMBINATION_ID )
AND ( PO_DIST_ALL.LINE_LOCATION_ID=PO.PO_LINE_LOCATIONS_ALL.LINE_LOCATION_ID )
AND ( PO_PO_HEADERS_ALL2.VENDOR_ID=PO.PO_VENDORS.VENDOR_ID )
AND ( PO_PO_HEADERS_ALL2.ORG_ID=APPS.HR_OPERATING_UNITS.ORGANIZATION_ID )
AND ( GL.GL_SETS_OF_BOOKS.SET_OF_BOOKS_ID=APPS.HR_OPERATING_UNITS.SET_OF_BOOKS_ID )
AND ( PO_PO_HEADERS_ALL2.CURRENCY_CODE=PO_Rates_GL_DR.FROM_CURRENCY )
AND ( trunc(PO_PO_HEADERS_ALL2.CREATION_DATE)=PO_Rates_GL_DR.CONVERSION_DATE )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=PO.PO_REQ_DISTRIBUTIONS_ALL.DISTRIBUTION_ID(+) )
AND ( PO.PO_REQ_DISTRIBUTIONS_ALL.REQUISITION_LINE_ID=PO.PO_REQUISITION_LINES_ALL.REQUISITION_LINE_ID(+) )
AND ( PO_LINES_LOC.PO_HEADER_ID=PO_PO_HEADERS_ALL2.PO_HEADER_ID )
AND ( PO.PO_LINE_LOCATIONS_ALL.PO_LINE_ID=PO_LINES_LOC.PO_LINE_ID )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=DT_REQ_ALL.DISTRIBUTION_ID(+) )
AND ( PO_DIST_ALL.PO_DISTRIBUTION_ID=PO_DIST_INV_DIST_ALL.PO_DISTRIBUTION_ID(+) )
AND ( PO_INV_DIST_ALL.INVOICE_ID(+)=PO_DIST_INV_DIST_ALL.INVOICE_ID )
AND ( PO_INV_DIST_ALL.SOURCE(+) <> 'XML GATEWAY' )
AND
( NVL(PO_PO_HEADERS_ALL2.CANCEL_FLAG,'N') <> 'Y' )
AND
( NVL(PO_PO_HEADERS_ALL2.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' )
AND
( NVL(PO_PO_HEADERS_ALL2.AUTHORIZATION_STATUS,'IN PROCESS') <> 'REJECTED' )
AND
( TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) BETWEEN TO_DATE('01-jan-2011') AND TO_DATE('04-jan-2011') )
AND
PO_Rates_GL_DR.TO_CURRENCY = 'USD'
AND
DB2.DB2_FDW_MGMT_V.RC_PARENT In ( 'Unavailable','Corp','Commercial' )
AND
( CASE
WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
THEN DT_REQ_ALL.EMPMGMTCD
END = DB2.DB2_FDW_MGMT_V.MH_CHILD )
Explain plan (sorry, I can't get the explain plan from SQL; this is from Toad):
Plan
SELECT STATEMENT ALL_ROWSCost: 53,932 Bytes: 2,607 Cardinality: 1
79 HASH UNIQUE Cost: 53,932 Bytes: 2,607 Cardinality: 1
78 NESTED LOOPS OUTER Cost: 53,931 Bytes: 2,607 Cardinality: 1
75 NESTED LOOPS OUTER Cost: 53,928 Bytes: 2,560 Cardinality: 1
72 NESTED LOOPS Cost: 53,902 Bytes: 2,552 Cardinality: 1
69 NESTED LOOPS OUTER Cost: 53,900 Bytes: 2,533 Cardinality: 1
66 NESTED LOOPS OUTER Cost: 53,898 Bytes: 2,521 Cardinality: 1
63 HASH JOIN OUTER Cost: 53,896 Bytes: 2,509 Cardinality: 1
40 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_DISTRIBUTIONS_ALL Cost: 3 Bytes: 26 Cardinality: 1
39 NESTED LOOPS Cost: 17,076 Bytes: 2,400 Cardinality: 1
37 NESTED LOOPS Cost: 17,073 Bytes: 2,374 Cardinality: 1
34 NESTED LOOPS Cost: 17,070 Bytes: 2,362 Cardinality: 1
31 NESTED LOOPS Cost: 17,066 Bytes: 2,347 Cardinality: 1
29 NESTED LOOPS Cost: 17,066 Bytes: 2,339 Cardinality: 1
26 NESTED LOOPS Cost: 17,065 Bytes: 2,312 Cardinality: 1
23 NESTED LOOPS Cost: 17,064 Bytes: 2,287 Cardinality: 1
20 NESTED LOOPS Cost: 17,062 Bytes: 2,261 Cardinality: 1
17 NESTED LOOPS Cost: 17,056 Bytes: 6,678 Cardinality: 3
15 HASH JOIN Cost: 17,056 Bytes: 6,663 Cardinality: 3
13 MERGE JOIN CARTESIAN Cost: 135 Bytes: 30,352 Cardinality: 14
5 VIEW VIEW DB2.DB2_FDW_MGMT_V Cost: 4 Bytes: 2,128 Cardinality: 1
4 SORT UNIQUE Cost: 4 Cardinality: 1
3 UNION-ALL
1 REMOTE REMOTE SERIAL_FROM_REMOTE PRDFDW.WORLD
2 FAST DUAL Cost: 3 Cardinality: 1
12 BUFFER SORT Cost: 135 Bytes: 560 Cardinality: 14
11 VIEW DB2. Cost: 131 Bytes: 560 Cardinality: 14
10 SORT UNIQUE Cost: 131 Bytes: 310 Cardinality: 14
9 UNION-ALL
7 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_DAILY_RATES Cost: 65 Bytes: 270 Cardinality: 9
6 INDEX SKIP SCAN INDEX (UNIQUE) GL.GL_DAILY_RATES_U1 Cost: 64 Cardinality: 1
8 INDEX SKIP SCAN INDEX (UNIQUE) GL.GL_DAILY_RATES_U1 Cost: 64 Bytes: 4,368 Cardinality: 546
14 TABLE ACCESS FULL TABLE PO.PO_HEADERS_ALL Cost: 16,920 Bytes: 32,754 Cardinality: 618
16 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ORGANIZATION_UNITS_PK Cost: 0 Bytes: 5 Cardinality: 1
19 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ORGANIZATION_INFORMATION Cost: 2 Bytes: 35 Cardinality: 1
18 INDEX RANGE SCAN INDEX HR.HR_ORGANIZATION_INFORMATIO_FK2 Cost: 1 Cardinality: 2
22 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ORGANIZATION_INFORMATION Cost: 2 Bytes: 26 Cardinality: 1
21 INDEX RANGE SCAN INDEX HR.HR_ORGANIZATION_INFORMATIO_FK2 Cost: 1 Cardinality: 1
25 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_SETS_OF_BOOKS Cost: 1 Bytes: 25 Cardinality: 1
24 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_SETS_OF_BOOKS_U2 Cost: 0 Cardinality: 1
28 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_VENDORS Cost: 1 Bytes: 27 Cardinality: 1
27 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_VENDORS_U1 Cost: 0 Cardinality: 1
30 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ALL_ORGANIZATION_UNTS_TL_PK Cost: 0 Bytes: 8 Cardinality: 1
33 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_LINES_ALL Cost: 4 Bytes: 60 Cardinality: 4
32 INDEX RANGE SCAN INDEX (UNIQUE) PO.PO_LINES_U2 Cost: 2 Cardinality: 4
36 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_LINE_LOCATIONS_ALL Cost: 3 Bytes: 12 Cardinality: 1
35 INDEX RANGE SCAN INDEX PO.PO_LINE_LOCATIONS_N1 Cost: 2 Cardinality: 1
38 INDEX RANGE SCAN INDEX PO.PO_DISTRIBUTIONS_N1 Cost: 2 Cardinality: 1
62 VIEW DB2. Cost: 36,819 Bytes: 1,090 Cardinality: 10
61 HASH UNIQUE Cost: 36,819 Bytes: 2,580 Cardinality: 10
60 NESTED LOOPS Cost: 36,818 Bytes: 2,580 Cardinality: 10
57 NESTED LOOPS Cost: 36,798 Bytes: 2,390 Cardinality: 10
54 NESTED LOOPS Cost: 36,768 Bytes: 2,220 Cardinality: 10
51 NESTED LOOPS Cost: 36,758 Bytes: 1,510 Cardinality: 10
48 NESTED LOOPS Cost: 36,747 Bytes: 1,050 Cardinality: 10
45 HASH JOIN Cost: 36,737 Bytes: 960 Cardinality: 10
43 HASH JOIN Cost: 34,602 Bytes: 230,340 Cardinality: 3,490
41 TABLE ACCESS FULL TABLE HR.PER_ALL_PEOPLE_F Cost: 1,284 Bytes: 1,848,420 Cardinality: 44,010
42 TABLE ACCESS FULL TABLE PO.PO_REQUISITION_LINES_ALL Cost: 31,802 Bytes: 18,340,080 Cardinality: 764,170
44 TABLE ACCESS FULL TABLE HR.PER_ALL_ASSIGNMENTS_F Cost: 2,134 Bytes: 822,540 Cardinality: 27,418
47 TABLE ACCESS BY INDEX ROWID TABLE HR.HR_ALL_ORGANIZATION_UNITS Cost: 1 Bytes: 9 Cardinality: 1
46 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.HR_ORGANIZATION_UNITS_PK Cost: 0 Cardinality: 1
50 TABLE ACCESS BY INDEX ROWID TABLE HR.PAY_COST_ALLOCATION_KEYFLEX Cost: 1 Bytes: 46 Cardinality: 1
49 INDEX UNIQUE SCAN INDEX (UNIQUE) HR.PAY_COST_ALLOCATION_KEYFLE_PK Cost: 0 Cardinality: 1
53 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQUISITION_HEADERS_ALL Cost: 1 Bytes: 71 Cardinality: 1
52 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQUISITION_HEADERS_U1 Cost: 0 Cardinality: 1
56 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQ_DISTRIBUTIONS_ALL Cost: 3 Bytes: 17 Cardinality: 1
55 INDEX RANGE SCAN INDEX PO.PO_REQ_DISTRIBUTIONS_N1 Cost: 2 Cardinality: 1
59 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 2 Bytes: 19 Cardinality: 1
58 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
65 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQ_DISTRIBUTIONS_ALL Cost: 2 Bytes: 12 Cardinality: 1
64 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQ_DISTRIBUTIONS_U1 Cost: 1 Cardinality: 1
68 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_REQUISITION_LINES_ALL Cost: 2 Bytes: 12 Cardinality: 1
67 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_REQUISITION_LINES_U1 Cost: 1 Cardinality: 1
71 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 2 Bytes: 19 Cardinality: 1
70 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
74 TABLE ACCESS BY INDEX ROWID TABLE AP.AP_INVOICE_DISTRIBUTIONS_ALL Cost: 26 Bytes: 16 Cardinality: 2
73 INDEX RANGE SCAN INDEX AP.AP_INVOICE_DISTRIBUTIONS_N7 Cost: 2 Cardinality: 37
77 TABLE ACCESS BY INDEX ROWID TABLE AP.AP_INVOICES_ALL Cost: 3 Bytes: 47 Cardinality: 1
76 INDEX RANGE SCAN INDEX (UNIQUE) AP.AP_INVOICES_U1 Cost: 2 Cardinality: 1
Thanks
Form a new table "new_table" from the 3 tables which participate in the CASE statement logic:
with DT_REQ_ALL as (
SELECT DISTINCT
PO_RDA.DISTRIBUTION_ID,
PO_RLA.requisition_line_id,
PO_RHA.DESCRIPTION PO_Descr,
PO_RHA.NOTE_TO_AUTHORIZER PO_Justification,
Req_Emp.FULL_NAME,
GL_CC.SEGMENT1 Req_Company_Code,
GL_CC.SEGMENT2 Req_Cost_Center,
Req_Emp_CC.SEGMENT1 Emp_Company_Code,
Req_Emp_CC.SEGMENT2 Emp_Cost_Center,
(Case
When GL_CC.SEGMENT2 <> 8000
Then TRUNC(GL_CC.SEGMENT1) || TRUNC(GL_CC.SEGMENT2) || '_' || NVL(GL_CC.SEGMENT6,'000')
Else TRUNC(Req_Emp_CC.SEGMENT1) || TRUNC(Req_Emp_CC.SEGMENT2) || '_' || NVL(Req_Emp_CC.SEGMENT6,'000')
End) EmpMgmtCD
FROM
PO.po_requisition_lines_all PO_rla,
PO.po_requisition_headers_all PO_rha,
PO.PO_REQ_DISTRIBUTIONS_ALL po_RDA,
GL.GL_CODE_COMBINATIONS gl_cc,
HR.PER_ALL_PEOPLE_F Req_Emp,
HR.PER_ALL_ASSIGNMENTS_F Req_Emp_Assign,
HR.hr_all_organization_units Req_Emp_Org,
HR.pay_cost_allocation_keyflex Req_Emp_CC
WHERE
PO_RDA.CODE_COMBINATION_ID = GL_CC.CODE_COMBINATION_ID and
PO_RLA.REQUISITION_LINE_ID = PO_RDA.REQUISITION_LINE_ID AND
PO_RLA.to_person_id = Req_Emp.PERSON_ID AND
PO_RLA.REQUISITION_HEADER_ID = PO_RHA.REQUISITION_HEADER_ID AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp.effective_start_date and Req_Emp.effective_end_date OR
Req_Emp.effective_start_date IS NULL) AND
Req_Emp.PERSON_ID = Req_Emp_Assign.PERSON_ID AND
Req_Emp_Assign.organization_id = Req_Emp_Org.organization_id AND
(trunc(PO_rla.CREATION_DATE) between Req_Emp_Assign.effective_start_date and Req_Emp_Assign.effective_end_date OR
Req_Emp_Assign.effective_start_date IS NULL) AND
Req_Emp_Assign.primary_flag = 'Y' AND
Req_Emp_Assign.assignment_type = 'E' AND
Req_Emp_Org.cost_allocation_keyflex_id = Req_Emp_CC.cost_allocation_keyflex_id
)
SELECT DISTINCT
D.DB2_FDW_MGMT_V.RC_PARENT,
DT_REQ_ALL.FULL_NAME,
DT_REQ_ALL.EMP_COMPANY_CODE,
DT_REQ_ALL.EMP_COST_CENTER,
PO.PO_VENDORS.VENDOR_NAME,
PO_PO_HEADERS_ALL2.SEGMENT1,
PO_PO_HEADERS_ALL2.CREATION_DATE,
PO_DIST_GL_CODE_COMB.SEGMENT1,
PO_DIST_GL_CODE_COMB.SEGMENT2,
PO_PO_HEADERS_ALL2.CURRENCY_CODE,
PO_INV_DIST_ALL.INVOICE_NUM,
PO_INV_DIST_ALL.INVOICE_DATE,
(PO_INV_DIST_ALL.INVOICE_AMOUNT* PO_Rates_GL_DR.CONVERSION_RATE),
(NVL(to_number(PO_DIST_ALL.AMOUNT_BILLED),0) * PO_Rates_GL_DR.CONVERSION_RATE),
PO_LINES_LOC.LINE_NUM,
GL.GL_SETS_OF_BOOKS.NAME,
CASE
WHEN TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) > PO_INV_DIST_ALL.INVOICE_DATE
THEN 1
ELSE 0
END ,
PO.PO_REQUISITION_LINES_ALL.LINE_LOCATION_ID,
TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE,'WW') + 8 WEEK_Ending
FROM
( SELECT * FROM
DB2.DB2_FDW_MGMT_V,
GL.GL_CODE_COMBINATIONS PO_DIST_GL_CODE_COMB,
DT_REQ_ALL
WHERE
DB2.DB2_FDW_MGMT_V.RC_PARENT In ( 'Unavailable','Corp','Commercial' )
AND
CASE
WHEN PO_DIST_GL_CODE_COMB.SEGMENT2 <> '1000'
THEN PO_DIST_GL_CODE_COMB.SEGMENT1 || PO_DIST_GL_CODE_COMB.SEGMENT2 || '_' || NVL(PO_DIST_GL_CODE_COMB.SEGMENT6,'000')
WHEN DT_REQ_ALL.EMPMGMTCD IS NOT NULL AND
PO_DIST_GL_CODE_COMB.SEGMENT2 = '1000'
THEN DT_REQ_ALL.EMPMGMTCD
END =DB2.DB2_FDW_MGMT_V.MH_CHILD
) new_table,
PO.PO_VENDORS,
PO.PO_HEADERS_ALL PO_PO_HEADERS_ALL2,
AP.AP_INVOICES_ALL PO_INV_DIST_ALL,
PO.PO_DISTRIBUTIONS_ALL PO_DIST_ALL,
PO.PO_LINES_ALL PO_LINES_LOC,
GL.GL_SETS_OF_BOOKS,
PO.PO_REQUISITION_LINES_ALL,
PO.PO_LINE_LOCATIONS_ALL,
AP.AP_INVOICE_DISTRIBUTIONS_ALL PO_DIST_INV_DIST_ALL,
APPS.HR_OPERATING_UNITS,
PO.PO_REQ_DISTRIBUTIONS_ALL,
( SELECT
FROM_CURRENCY,
TO_CURRENCY,
CONVERSION_DATE,
CONVERSION_RATE
FROM GL.GL_DAILY_RATES
UNION
SELECT Distinct
'USD',
'USD',
CONVERSION_DATE,
1
FROM GL.GL_DAILY_RATES
) PO_Rates_GL_DR
WHERE
( PO_DIST_GL_CODE_COMB.CODE_COMBINATION_ID=PO_DIST_ALL.CODE_COMBINATION_ID )
AND ( PO_DIST_ALL.LINE_LOCATION_ID=PO.PO_LINE_LOCATIONS_ALL.LINE_LOCATION_ID )
AND ( PO_PO_HEADERS_ALL2.VENDOR_ID=PO.PO_VENDORS.VENDOR_ID )
AND ( PO_PO_HEADERS_ALL2.ORG_ID=APPS.HR_OPERATING_UNITS.ORGANIZATION_ID )
AND ( GL.GL_SETS_OF_BOOKS.SET_OF_BOOKS_ID=APPS.HR_OPERATING_UNITS.SET_OF_BOOKS_ID )
AND ( PO_PO_HEADERS_ALL2.CURRENCY_CODE=PO_Rates_GL_DR.FROM_CURRENCY )
AND ( trunc(PO_PO_HEADERS_ALL2.CREATION_DATE)=PO_Rates_GL_DR.CONVERSION_DATE )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=PO.PO_REQ_DISTRIBUTIONS_ALL.DISTRIBUTION_ID(+) )
AND ( PO.PO_REQ_DISTRIBUTIONS_ALL.REQUISITION_LINE_ID=PO.PO_REQUISITION_LINES_ALL.REQUISITION_LINE_ID(+) )
AND ( PO_LINES_LOC.PO_HEADER_ID=PO_PO_HEADERS_ALL2.PO_HEADER_ID )
AND ( PO.PO_LINE_LOCATIONS_ALL.PO_LINE_ID=PO_LINES_LOC.PO_LINE_ID )
AND ( PO_DIST_ALL.REQ_DISTRIBUTION_ID=DT_REQ_ALL.DISTRIBUTION_ID(+) )
AND ( PO_DIST_ALL.PO_DISTRIBUTION_ID=PO_DIST_INV_DIST_ALL.PO_DISTRIBUTION_ID(+) )
AND ( PO_INV_DIST_ALL.INVOICE_ID(+)=PO_DIST_INV_DIST_ALL.INVOICE_ID )
AND ( PO_INV_DIST_ALL.SOURCE(+) <> 'XML GATEWAY' )
AND
( NVL(PO_PO_HEADERS_ALL2.CANCEL_FLAG,'N') <> 'Y' )
AND
( NVL(PO_PO_HEADERS_ALL2.CLOSED_CODE, 'OPEN') <> 'FINALLY CLOSED' )
AND
( NVL(PO_PO_HEADERS_ALL2.AUTHORIZATION_STATUS,'IN PROCESS') <> 'REJECTED' )
AND
( TRUNC(PO_PO_HEADERS_ALL2.CREATION_DATE) BETWEEN TO_DATE('01-jan-2011') AND TO_DATE('04-jan-2011') )
AND
PO_Rates_GL_DR.TO_CURRENCY = 'USD'
-
How to select columns with null values
HI
In my table ‘A’ I have 10 columns and 30,000 records. I need all those columns whose value is null for all the records.
For example in the below table column 'suffix' is null for all the records. So I want column suffix to be selected.
Name Suffix Street
James 1100 Washington street
Richard 273 GEORGIA ST
Arnold 3018 OAKHILL AVE
MICHAEL 834 E 161ST ST
Joseph 410 PINE AVE
Thanks in advance
True...
But I think the NULL needs to be handled here; otherwise it will again throw an error like this:
satyaki>
satyaki>select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for 32-bit Windows: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production
Elapsed: 00:00:00.20
satyaki>
satyaki>
satyaki>
satyaki>SELECT owner, table_name, column_name
2 FROM all_tab_cols
3 WHERE data_type NOT IN ('BLOB', 'LONG', 'CLOB')
4 AND dbms_xmlgen.getxmltype('select count('
5 || CASE
6 WHEN data_type NOT IN
7 ('BLOB', 'LONG', 'CLOB')
8 THEN
9 column_name
10 ELSE
11 '1'
12 END
13 || ') c from '
14 || owner
15 || '.'
16 || table_name).EXTRACT (
17 '//text()'
18 ).getnumberval () = 0
19 AND table_name IN
20 (SELECT table_name
21 FROM all_tab_privs
22 WHERE privilege = 'SELECT' AND USER IN (grantor, grantee));
ERROR:
ORA-19202: Error occurred in XML processing
ORA-24347: Warning of a NULL column in an aggregate function
ORA-06512: at "SYS.DBMS_XMLGEN", line 288
ORA-06512: at line 1
no rows selected
Elapsed: 00:00:04.03
satyaki>Or,
satyaki>
satyaki>
satyaki>SELECT table_name, column_name
2 FROM user_tab_cols
3 WHERE data_type NOT IN ('BLOB', 'LONG', 'CLOB')
4 AND dbms_xmlgen.getxmltype('select count('
5 || CASE
6 WHEN data_type NOT IN ('BLOB', 'LONG', 'CLOB') THEN
7 column_name
8 ELSE
9 '1'
10 END
11 || ') c from '||table_name).EXTRACT('//text()').getnumberval() = 0;
AND dbms_xmlgen.getxmltype('select count('
ERROR at line 4:
ORA-19202: Error occurred in XML processing
ORA-24347: Warning of a NULL column in an aggregate function
ORA-06512: at "SYS.DBMS_XMLGEN", line 288
ORA-06512: at line 1
Elapsed: 00:00:02.66
satyaki>
satyaki>Do you have any idea to resolve this issue in this context?
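For what it's worth, the ORA-24347 path can be avoided entirely by counting non-null values directly: a column is NULL for all records exactly when COUNT(column) is 0 while the table has rows. A minimal sketch, assuming the table is named A and lives in the current schema:

```sql
-- Print every column of table A that is NULL in all rows.
-- COUNT(col) ignores NULLs, so COUNT(col) = 0 with rows present
-- means the column is entirely NULL.
declare
  l_nonnull number;
  l_rows    number;
begin
  select count(*) into l_rows from a;
  for c in (select column_name from user_tab_columns where table_name = 'A') loop
    execute immediate
      'select count("' || c.column_name || '") from a' into l_nonnull;
    if l_rows > 0 and l_nonnull = 0 then
      dbms_output.put_line(c.column_name || ' is NULL for all records');
    end if;
  end loop;
end;
/
```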
Regards.
Satyaki De. -
SQL query slow in new Red Hat environment
We just migrated to a new dev environment on Linux Red Hat 5, and now the query is very slow. I used Toad to run the query and it took about 700 milliseconds to finish; however, from any server connection the same SQL query takes hours to finish.
I checked toad monitor, it said need to increase db_buffer_cache and shared pool too small.
Also, the three red alerts from Toad are:
1. Library Cache get hit ratio: Dynamic or unsharable sql
2. Chained fetch ratio: PCT free too low for a table
3. parse to execute ratio: HIgh parse to execute ratio.
The app team said it ran very quickly in the old AIX system. However, when I ran it in the old system and monitored it in Toad, it gave me the same red alerts there, although it did return the query results a lot quicker.
Here is the parameters in the old system (11gr1 on AIX):
SQL> show parameter target
NAME TYPE VALUE
--------------------------------
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 0
memory_target big integer 0
pga_aggregate_target big integer 278928K
sga_target big integer 0
SQL> show parameter shared
NAME TYPE VALUE
--------------------------------
hi_shared_memory_address integer 0
max_shared_servers integer
shared_memory_address integer 0
shared_pool_reserved_size big integer 31876710
shared_pool_size big integer 608M
shared_server_sessions integer
shared_servers integer 0
SQL> show parameter db_buffer
SQL> show parameter buffer
NAME TYPE VALUE
--------------------------------
buffer_pool_keep string
buffer_pool_recycle string
db_block_buffers integer 0
log_buffer integer 2048000
use_indirect_data_buffers boolean FALSE
SQL>
In new 11gr2 Linux REDHAT parameter:
NAME TYPE VALUE
-----------
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 2512M
memory_target big integer 2512M
parallel_servers_target integer 192
pga_aggregate_target big integer 0
sga_target big integer 1648M
SQL> show parameter shared
NAME TYPE VALUE
-----------
hi_shared_memory_address integer 0
max_shared_servers integer
shared_memory_address integer 0
shared_pool_reserved_size big integer 28M
shared_pool_size big integer 0
shared_server_sessions integer
shared_servers integer 1
SQL> show parameter buffer
NAME TYPE VALUE
-----------
buffer_pool_keep string
buffer_pool_recycle string
db_block_buffers integer 0
log_buffer integer 18857984
use_indirect_data_buffers boolean FALSE
SQL>
Please help. Thanks in advance.
846422 wrote:
why need ddl? we have a sql query slow.
The DDL shows the physical structure of the table and its physical storage characteristics. All are relevant in performance tuning.
As for the SQL query being slow. It is not.
You have not provided any evidence that it is slow. And no, comparing performance with a totally different system is not a valid baseline for comparison. (most cars have 4 wheels, a gearbox and a steering wheel - but that does not mean you can compare different cars like a VW Beetle with a VW Porsche)
What is slow? What are the biggest wait states for the SQL? What does the execution plan say?
You have not defined a problem - you identified a symptom called "+query is slow+". You need to diagnose the condition by determining exactly what the SQL query is doing in the database. (and please, do not use TOAD and similar tools in an attempt to do this - do it properly instead) -
Can I put a SQL query into a bind variable and then use it to output report
Hi,
Can I put a SQL query into a bind variable and then use it to output report?
I want to create a report and a "text area" item (say P1_TEXT) which lets users input a SQL query (they are all technical users and know SQL very well). Then I use a bind variable (that text area) to store the SQL statement. Then I add a submit button, and I want to use the following to output the report:
select * from (:P1_TEXT);
Do you think it is possible to do that? Any known limitations for APEX in this area?
Thanks a lot,
Angela
You can, but make sure it's what you really want to do. Make sure you are VERY familiar with SQL injection. Most people who know what it is go out of their way to prevent SQL injection; you're going out of your way to allow it.
You can try using &P1_TEXT. instead of bind variable syntax. Bind variables are one of the best ways to prevent SQL Injection, which is why it's not working for you.
Once again, I strongly urge you to consider the implications of your app, but this suggestion should get it working.
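If the page really must run user-supplied SQL, one alternative to substitution syntax is a classic report region whose source type is "PL/SQL Function Body returning SQL Query", which hands APEX the statement text at run time. This is only a sketch; it still permits arbitrary SQL, so it belongs behind strict authorization:

```sql
-- Region source (type: PL/SQL Function Body returning SQL Query).
-- Returns whatever the trusted user typed into P1_TEXT.
begin
  return :P1_TEXT;
end;
```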
Tyler -
SQL query in SQL_REDO from LogMiner showing WHERE clause with ROWID
The SQL_REDO generated by LogMiner shows a WHERE clause with ROWID. I don't want to use the ROWID; I want to use the actual column values instead.
OPERATION: DELETE
SQL_REDO:
delete from "OE"."ORDERS"
where "ORDER_ID" = '2413'
and "ORDER_MODE" = 'direct'
and "CUSTOMER_ID" = '101'
and "ORDER_STATUS" = '5'
and "ORDER_TOTAL" = '48552'
and "SALES_REP_ID" = '161'
and "PROMOTION_ID" IS NULL
and ROWID = 'AAAHTCAABAAAZAPAAN';
SQL_UNDO:
insert into "OE"."ORDERS" ("ORDER_ID","ORDER_MODE","CUSTOMER_ID","ORDER_STATUS","ORDER_TOTAL","SALES_REP_ID","PROMOTION_ID")
values ('2413','direct','101','5','48552','161',NULL);
OPERATION: DELETE
SQL_REDO:
delete from "OE"."ORDERS"
where "ORDER_ID" = '2430'
and "ORDER_MODE" = 'direct'
and "CUSTOMER_ID" = '101'
and "ORDER_STATUS" = '8'
and "ORDER_TOTAL" = '29669.9'
and "SALES_REP_ID" = '159'
and "PROMOTION_ID" IS NULL
and ROWID = 'AAAHTCAABAAAZAPAAe';
SQL_UNDO:
insert into "OE"."ORDERS" ("ORDER_ID","ORDER_MODE","CUSTOMER_ID","ORDER_STATUS","ORDER_TOTAL","SALES_REP_ID","PROMOTION_ID")
values ('2430','direct','101','8','29669.9','159',NULL);
Please let me know solution/document which will convert SQL redo rowid value with actual value.
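One thing that commonly removes the ROWID predicate is database supplemental logging: without it, the redo stream does not carry enough identifying columns, so LogMiner falls back to ROWID. A sketch of enabling it (requires ALTER DATABASE privilege, and only affects redo generated after the change):

```sql
-- Log minimal supplemental data plus primary key columns, so
-- LogMiner can build WHERE clauses from key values instead of ROWID.
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
```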
Thanks,
Please enclose your output within code tags so that people here can read it easily and help you. Also, what is the reason you want to remove the ROWID?
Salman
Edited by: Salman Qureshi on Mar 20, 2013 3:53 PM -
Hi,
How can I find, using an SCCM SQL query, whether an application deployed on Windows 7 machines was installed by the SCCM 2012 server or installed manually by a user/technician? Please let me know.
Thanks, is it not possible via any script also?
Like Torsten said, how can you tell the difference between CM12-installed applications and locally installed ones? Once you can answer that, you can write the report.
Garth Jones | My blogs: Enhansoft and Old Blog site | Twitter: @GarthMJ
QUERY FOR CUSTOMERS FULL DEBIT AND CREDIT WITH CLOSING BALANCE
Hi Friends,
I need a query for customers' full debit and credit with closing balance, with selection criteria of from date and to date.
I know the Trial Balance report would sort out this issue... but I need a routeday-wise report.
1. Business Partner Master Data - I created one UDF field called U_Routeday (MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY)
2. The query should have the following selection criteria:
- Routeday [%0]
- Posting Date [%1]
- Posting Date [%2]
CardCode    Debit       Credit      Balance
D10503      25031.50    24711.50    2962.00
D10641      5466.00     7460.00     285.00
D10642      2866.00     142.00
Give any helpful query ASAP... Thanks in advance.
Hi,
Try this query:
Declare @fromdate as datetime
Declare @Todate as datetime
Declare @Code as nvarchar(25)

set @fromdate = (select min(Ta.[RefDate]) from OJDT Ta where Ta.[RefDate] >= [%0])
set @Todate = (select max(Tb.[RefDate]) from OJDT Tb where Tb.[RefDate] <= [%1])
set @Code = (select max(Tc.[ShortName]) from JDT1 Tc where Tc.[ShortName] = [%2])
SELECT [Name] as AcctName,
       [Jan] = sum([1]), [Feb] = sum([2]), [Mar] = sum([3]),
       [Apr] = sum([4]), [May] = sum([5]), [June] = sum([6]),
       [July] = sum([7]), [Aug] = sum([8]), [Sept] = sum([9]),
       [Oct] = sum([10]), [Nov] = sum([11]), [Dec] = sum([12]),
       total = sum(isnull([1],0) + isnull([2],0) + isnull([3],0) + isnull([4],0)
                 + isnull([5],0) + isnull([6],0) + isnull([7],0) + isnull([8],0)
                 + isnull([9],0) + isnull([10],0) + isnull([11],0) + isnull([12],0))
from (SELECT T0.[ShortName] as Name,
             sum(T0.[Debit] - T0.[Credit]) as T,
             month(T2.[RefDate]) as month
      FROM JDT1 T0
      INNER JOIN OACT T1 ON T0.Account = T1.AcctCode
      INNER JOIN OJDT T2 ON T0.TransId = T2.TransId
      WHERE T2.[RefDate] between @fromdate and @Todate
        and T0.[ShortName] = @Code
      GROUP BY T0.[ShortName], T2.[RefDate]) S
Pivot (sum(T) For month IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])) P
group by [Name],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12]
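Note that the query above pivots the net movement (Debit - Credit) by month, while the original request was per-customer Debit, Credit and closing Balance for a date range. The intended arithmetic can be sketched in Python, with hypothetical in-memory rows standing in for JDT1 journal lines (assuming the closing balance includes an opening balance built from lines before the from-date):

```python
from datetime import date

# Hypothetical journal lines: (card_code, posting_date, debit, credit)
lines = [
    ("D10503", date(2014, 1, 5), 100.0, 0.0),
    ("D10503", date(2014, 2, 1), 0.0, 40.0),
    ("D10641", date(2014, 2, 3), 50.0, 0.0),
]

def debit_credit_balance(lines, from_date, to_date):
    """Per customer: total debit/credit inside the range, plus closing
    balance = opening balance (lines before from_date) + debit - credit."""
    result = {}
    for card, d, debit, credit in lines:
        row = result.setdefault(card, {"debit": 0.0, "credit": 0.0, "balance": 0.0})
        if d < from_date:
            # before the range: contributes to the opening balance only
            row["balance"] += debit - credit
        elif d <= to_date:
            # inside the selection range
            row["debit"] += debit
            row["credit"] += credit
            row["balance"] += debit - credit
    return result

r = debit_credit_balance(lines, date(2014, 2, 1), date(2014, 2, 28))
```

In SQL this would be a conditional aggregation over JDT1, joined to the business partner master (OCRD) to filter on the U_Routeday UDF.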
Let me know your result.
Thanks & Regards,
Nagarajan -
Document library view: Group by a column with multiple values
I have a document library which has a managed metadata column.
I would like to create a view which groups the documents by this managed metadata column.
The managed metadata column can have multiple values.
I know that this is not possible with SharePoint's group by, since it only accepts columns that hold a single value.
But is this possible to accomplish by some other means, e.g. Content query web part? Or is there perhaps a 3rd party solution to this?
Is it possible to change the group-by settings somehow to allow Group By to function with columns with multiple values? This may be far-fetched...
Hi Pekch,
I'm assuming you have VS2010 to build the custom web part. From there you will need to figure out the following:
1. Get an SPList object for the document library (see below for a code example).
2. Loop through all the documents in the SPList object.
3. If you have audience targeting enabled, determine whether the user has access to each document by checking the "Target_x0020_Audiences" column.
4. As you also want to group by metadata, populate two DataTables (one with a column containing unique metadata values, and another with a metadata column plus the other document-related columns). Link these two tables via a DataSet relation.
5. Set the DataSet as the data source for a repeater, add some CSS and JavaScript for the group expand/collapse, and it should be close to what you need.
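The core of step 4 is simply listing each document under every value of its multi-valued column, which is exactly what SharePoint's single-value group-by cannot do. A language-neutral sketch of that grouping (in Python, with made-up document data for illustration):

```python
# Each document carries a multi-valued metadata field; group by listing the
# document under every value it carries.
docs = [
    {"name": "spec.docx", "tags": ["Finance", "2014"]},
    {"name": "plan.xlsx", "tags": ["Finance"]},
    {"name": "notes.txt", "tags": ["2014"]},
]

def group_by_multivalue(docs, field):
    """Map each distinct metadata value to the documents that carry it."""
    groups = {}
    for doc in docs:
        for value in doc[field]:
            groups.setdefault(value, []).append(doc["name"])
    return groups

groups = group_by_multivalue(docs, "tags")
```

A document with two values appears in two groups, so the grouped view will show more entries than there are documents; that is inherent to multi-value grouping.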
This will be a time-consuming task if you don't know where to start or have problems figuring out how to perform a certain operation. So you may want to decide whether the functionality you want is required or just a "nice to have". Good luck, and if I have some spare time, I'll create a blog post outlining how to do all of the above.
I got the below code from a sharepoint blog sometime in the past and you can use it to retrieve a list.
You can use it like this: GetListByUrl("http://servername/Shared%20Documents/Forms/AllItems.aspx")
using System;
using Microsoft.SharePoint;

public SPList GetListByUrl(string listURL)
{
    SPList list = null;
    try
    {
        using (SPSite site = new SPSite(listURL))
        {
            if (site != null)
            {
                // Strip off the site url, leaving the rest.
                // We'll use this to open the web.
                string webUrl = listURL.Substring(site.Url.Length);

                // Strip off anything after /forms/
                int formsPos = webUrl.IndexOf("/forms/", 0, StringComparison.InvariantCultureIgnoreCase);
                if (formsPos >= 0)
                    webUrl = webUrl.Substring(0, webUrl.LastIndexOf('/', formsPos));

                // Strip off anything after /lists/
                int listPos = webUrl.IndexOf("/lists/", 0, StringComparison.InvariantCultureIgnoreCase);
                if (listPos >= 0)
                {
                    // Must be a custom list
                    webUrl = webUrl.Substring(0, webUrl.LastIndexOf('/', listPos));
                }
                else
                {
                    // No /lists/, must be a document library.
                    // Strip off the document library name.
                    webUrl = webUrl.Substring(0, webUrl.LastIndexOf('/'));
                }

                // Get the web site
                using (SPWeb web = site.OpenWeb(webUrl))
                {
                    if (web != null)
                    {
                        // Initialize the web (avoids COM exceptions)
                        string title = web.Title;

                        // Form the full path to the list
                        //string relativelistURL = listURL.Substring(web.Url.Length);
                        //string url = SPUrlUtility.CombineUrl(web.Url, relativelistURL);

                        // Get the list
                        list = web.GetList(listURL);
                    }
                }
            }
        }
    }
    catch { }
    return list;
} -
Pro*C & SQLDA with NULL value for predicate column
Hi: I am using a C program to update a table via dynamic SQL (method 4) and a SQLDA. In the UPDATE statement predicate I have placeholders (as in TBLCOL=:C000). One of the columns in the predicate contains NULL values, so I set L[n] = 0, V[n] = pData (with pData[0] = '\0'), *(I[n]) = -1, and T[n] = 5 (for text). I cannot find the row that I know is there.
I cannot change my statement to contain TBLCOL IS NULL, since I don't know ahead of time whether I'm looking for rows with NULL values in this column. The Pro*C manual says that setting the appropriate *(I[n]) = -1 indicates to Oracle to simulate the IS NULL clause and update the appropriate rows. In my case, I receive 1403 as SQLCODE when I use TBLCOL=:C000 versus TBLCOL IS NULL. What am I doing wrong? Thank you for your help.
You should include these columns as well:
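For what it's worth, the 1403 (no data found) is the expected outcome here: in SQL, TBLCOL = :x is never true when either side is NULL, so binding a NULL through the indicator cannot match a NULL column with the = operator. The statement text itself has to say TBLCOL IS NULL, or use a NULL-safe predicate such as (TBLCOL = :x OR (TBLCOL IS NULL AND :x IS NULL)). A sketch of choosing the predicate text per value, in Python standing in for the C code (names are illustrative):

```python
def predicate(column, value):
    """Return (sql_fragment, bind_values): emit IS NULL when the value is
    None, a bind placeholder otherwise."""
    if value is None:
        return "%s IS NULL" % column, []
    return "%s = :1" % column, [value]

frag, binds = predicate("TBLCOL", None)
```

The same decision can be made in C before building the dynamic statement string, so the SQLDA indicator is only used for genuinely bound values.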
ChangeType (see mxi_changetype)
ValOwner (repository)
UserID ("jobid=<>", usermskey, GUI (mmc), DG (dyngrp), reconcile)
IdAudit (this is the event task (add and del member) for assignments)
ParentAuditId (AuditID of parent which last updated the attribute, not consistent)
ChangedBy (Holds the MSKEY of the user which last changed the attribute)
ExpiryTime
to make sure you get a fuller picture of the audit record.
Your selection does not cover all events and descriptions
br,
Chris -
How to avoid showing a row in MDX when a column has a NULL value
Hello,
I have this situation: an MDX result that returns rows with NULL values in some columns. I tried NON EMPTY and NONEMPTY, but the result is the same. What I want is to discard a row if a column has a NULL value, but I don't know how to implement it. Could somebody help me, please?
Thanks a lot.
Sukey Nakasima
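Outside of MDX specifics, the rule being asked for ("if any column in the row is NULL, discard the whole row") is easy to state over the result set; a sketch with made-up rows:

```python
rows = [
    ("Bikes", 120.0, 35.0),
    ("Gloves", None, 12.0),   # a NULL measure: drop the whole row
    ("Helmets", 80.0, None),
    ("Locks", 15.0, 4.0),
]

def drop_rows_with_nulls(rows):
    """Keep only rows where every column has a value."""
    return [r for r in rows if all(v is not None for v in r)]

clean = drop_rows_with_nulls(rows)
```

In MDX itself the equivalent is a Filter over the set testing each measure with IsEmpty, which is what the linked thread ends up doing.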
Hello,
I found the answer in this link https://social.technet.microsoft.com/Forums/sqlserver/en-US/f9c02ce3-96b2-4cd6-921f-3679eb22d790/dont-want-to-cross-join-with-null-values-in-mdx?forum=sqlanalysisservices
Thanks a lot.
Sukey Nakasima -
Update column with ROW_NUMBER() value
I have a column that I would like to update with a ROW_NUMBER() value partitioned by a specific column.
CREATE TABLE ##tmp (id INT NULL,
value1 VARCHAR(10),
value2 VARCHAR(10))
INSERT INTO ##tmp (value1,
value2)
VALUES ('A', 'asdfasdf'),
('A', 'asdf'),
('A', 'VC'),
('B', 'aasdf'),
('C', 'sdfgs'),
('C', 'xdfbhsdty'),
('C', '23sdgfg'),
('C', '234')
-- update the ID column with the values below
SELECT ROW_NUMBER() OVER (PARTITION BY Value1 ORDER BY Value2),
Value1,
Value2
FROM ##tmp
I have 14 million records. Can someone explain the best way to do this? Does the ORDER BY destroy performance for this many records?
Thanks!
CREATE TABLE ##tmp (id INT NULL,
value1 VARCHAR(10),
value2 VARCHAR(10))
INSERT INTO ##tmp (value1,
value2)
VALUES ('A', 'asdfasdf'),
('A', 'asdf'),
('A', 'VC'),
('B', 'aasdf'),
('C', 'sdfgs'),
('C', 'xdfbhsdty'),
('C', '23sdgfg'),
('C', '234')
-- update the ID column with the values below
; with cte as (
SELECT ROW_NUMBER() OVER (PARTITION BY Value1 ORDER BY Value2) as RowNumber,
Value1,
Value2
FROM ##tmp
)
update cte
set Id = RowNumber;
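For reference, the numbering the CTE assigns can be sketched outside SQL: partition the rows by Value1, order each partition by Value2, and number from 1 (note that Python's default string ordering is case-sensitive, while the SQL Server collation may not be):

```python
from itertools import groupby

rows = [("A", "asdfasdf"), ("A", "asdf"), ("A", "VC"),
        ("B", "aasdf"),
        ("C", "sdfgs"), ("C", "xdfbhsdty"), ("C", "23sdgfg"), ("C", "234")]

def row_numbers(rows):
    """Emit (row_number, value1, value2): one numbering per partition of
    value1, each partition ordered by value2 -- the same shape the CTE assigns."""
    out = []
    for key, group in groupby(sorted(rows), key=lambda r: r[0]):
        for n, (v1, v2) in enumerate(sorted(group, key=lambda r: r[1]), start=1):
            out.append((n, v1, v2))
    return out

numbered = row_numbers(rows)
```

On 14 million rows the ORDER BY does force a sort within each partition, but it is a one-off cost for the update; an index on (value1, value2) can let the engine read the rows pre-sorted, and batching the update keeps the transaction log manageable.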
Russel Loski, MCT, MCSE Data Platform/Business Intelligence. Twitter: @sqlmovers; blog: www.sqlmovers.com -
SQL query slow with call to function
I have a SQL query that will return in less than a second or two with a function selected in-line in the "from" clause of the statement. As soon as I select that returned value in the SQL statement, the statement takes anywhere from 2 to 5 minutes to return. Here is a simplified sample from the statement:
This statement returns in a second or 2.
select A.pk_id
from stu_schedule A, stu_school B, stu_year C, school_year D,
(select calc_ytd_class_abs2(Z.PK_ID,'U') ytd_unx
from stu_schedule Z) II
where B.pk_id = A.fk_stu_school
and C.pk_id = B.fk_stu_year
and D.pk_id = C.year
and D.school_year = '2011';
If I add this function call in, the statement runs extremely poorly:
select A.pk_id,
II.ytd_unx
from stu_schedule A, stu_school B, stu_year C, school_year D,
(select calc_ytd_class_abs2(Z.PK_ID,'U') ytd_unx
from stu_schedule Z) II
where B.pk_id = A.fk_stu_school
and C.pk_id = B.fk_stu_year
and D.pk_id = C.year
and D.school_year = '2011';
Here is the function that is called:
create or replace FUNCTION calc_ytd_class_abs2 (p_fk_stu_schedule in varchar2,
p_legality in varchar2) return number IS
l_days_absent number := 0;
CURSOR get_class_abs IS
select (select nvl(max(D.days_absent),'0')
from cut_code D
where D.pk_id = C.fk_cut_code
and (D.legality = p_legality
or p_legality = '%')) days_absent
from stu_schedule_detail B, stu_class_attendance C
where B.fk_stu_schedule = p_fk_stu_schedule
and C.fk_stu_schedule_detail = B.pk_id;
BEGIN
FOR x in get_class_abs LOOP
l_days_absent := l_days_absent + x.days_absent;
END LOOP;
return (l_days_absent);
END calc_ytd_class_abs2;
The query returns anywhere from 6,000 to 32,000 rows. For each of those rows a parameter is passed in to 4 different functions to get YTD totals. When I call the functions in the in-line view but do not select from them in the main SQL, the report (this is Application Express 4.0 interactive reports, just an FYI) runs fast. The report comes back in a few seconds. But when I select from the in-line view to display those YTD totals, the report runs extremely slowly. I know there are articles about context switching and how mixing SQL with PL/SQL performs poorly. So I tried a pipelined table function where the functions for the YTD totals populate the columns of the pipelined table, and I select from the pipelined table in the SQL query in the interactive report. That seemed to perform a little worse, from what I can tell.
Thanks for any help you can offer.
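The pattern described (fast until the function's result is actually selected) fits the classic cost of per-row PL/SQL calls: the optimizer is free not to evaluate a column nobody reads, so the cheap run never paid for the function at all. When the same inputs repeat across rows, caching results is the usual mitigation; Oracle offers this natively via scalar subquery caching (wrapping the call as (SELECT f(x) FROM dual)) or, in 11g and later, RESULT_CACHE functions. The effect of such a cache, sketched in Python with a stand-in function:

```python
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=None)
def ytd_abs(schedule_id, legality):
    """Stand-in for calc_ytd_class_abs2: imagine each call runs the cursor
    loop; the cache collapses repeated (schedule_id, legality) inputs."""
    calls["n"] += 1
    return 1.0  # hypothetical YTD total

# 5 rows, but only 2 distinct inputs -> only 2 real evaluations
for sid in ["S1", "S2", "S1", "S1", "S2"]:
    ytd_abs(sid, "U")
```

Whether this helps depends on how often schedule IDs repeat across the report's rows; if every row has a distinct ID, the fix is instead to rewrite the function's cursor as a join in the main query so the totals are computed set-wise.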