Where to perform SOLAR_PROJECT_ADMIN
Hi friends,
I have some doubts to clarify:
1. Do we require a separate client to be created? I am talking about the client where I would use Tcode SOLAR_PROJECT_ADMIN.
2. Do I require a separate client for Maintenance Optimizer?
3. Can I configure Maintenance Optimizer in the same client where I would create my project with Tcode SOLAR_PROJECT_ADMIN?
Your assistance would be highly appreciated.
Regards
Ayush Johri
Hello Ayush,
You can go ahead and use the same single client for all your configuration. You need two clients (one for development and one for production) only if you have more extensive customization.
Regards
Farzan
Similar Messages
-
SQL Query with a little bit more complicated WHERE clause performance issue
Hello, I have some performance issue in this case:
Very simplified query:
SELECT COUNT(*) FROM Items
WHERE
ConditionA OR
ConditionB OR
ConditionC OR ...
Simply put, I have to determine how many Items the user has access to through some complicated conditions.
When there is a large number of records (100,000+) in the Items table and roughly 10 complicated conditions concatenated in the WHERE clause, I get the result in about 2 seconds in my case. The problem is when very few conditions are met, e.g. when I get only 10 Items out of 100,000.
How can I improve the performance in this "Get my items" case?
Additional information:
the query is generated by EF 6.1
MS SQL 2012 Express
Here is the main part of the real SQL Execution Plan:
Can you post table/index DDL? Query?
Sample query:
exec sp_executesql N'SELECT
[GroupBy1].[A1] AS [C1]
FROM ( SELECT
COUNT(1) AS [A1]
FROM [dbo].[Tickets] AS [Extent1]
LEFT OUTER JOIN [dbo].[Services] AS [Extent2] ON [Extent1].[ServiceId] = [Extent2].[Id]
WHERE (@p__linq__0 = 1) OR ([Extent1].[SubmitterKey] = @p__linq__1) OR ([Extent1].[OperatorKey] = @p__linq__2) OR (([Extent1].[OperatorKey] IS NULL) AND (@p__linq__2 IS NULL)) OR ([Extent1].[SolverKey] = @p__linq__3) OR (([Extent1].[SolverKey] IS NULL) AND (@p__linq__3 IS NULL)) OR ([Extent1].[Incident2ndLineSupportKey] = @p__linq__4) OR (([Extent1].[Incident2ndLineSupportKey] IS NULL) AND (@p__linq__4 IS NULL)) OR ((@p__linq__5 = 1) AND ((1 = CAST( [Extent1].[TicketType] AS int)) OR ((@p__linq__6 = 1) AND (((2 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[IncidentManager] = @p__linq__7) OR (([Extent2].[IncidentManager] IS NULL) AND (@p__linq__7 IS NULL)))) OR ((3 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[ServiceRequestManager] = @p__linq__8) OR (([Extent2].[ServiceRequestManager] IS NULL) AND (@p__linq__8 IS NULL)))) OR ((4 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[ProblemManager] = @p__linq__9) OR (([Extent2].[ProblemManager] IS NULL) AND (@p__linq__9 IS NULL)))) OR ((5 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[ChangeManager] = @p__linq__10) OR (([Extent2].[ChangeManager] IS NULL) AND (@p__linq__10 IS NULL)))))) OR ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceDeputyManagers] AS [Extent3]
WHERE ([Extent1].[ServiceId] = [Extent3].[ServiceId]) AND ( CAST( [Extent3].[TicketType] AS int) = CAST( [Extent1].[TicketType] AS int)) AND ([Extent3].[UserProviderKey] = @p__linq__11)
)))) OR ((2 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[AllowAccessToOtherOperatorsIncidents] = 1) OR ((201 = [Extent1].[TicketStateValue]) AND ([Extent2].[WfDisableIncidentTakeFromQueueAction] <> cast(1 as bit)))) AND ([Extent2].[Incident1stLineSupportLimitedAccess] <> cast(1 as bit))) OR ((3 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[AllowAccessToOtherOperatorsServiceRequests] = 1) OR ((301 = [Extent1].[TicketStateValue]) AND ([Extent2].[WfDisableServiceRequestTakeFromQueueAction] <> cast(1 as bit)))) AND ([Extent2].[ServiceRequestLimitedAccess] <> cast(1 as bit))) OR ((4 = CAST( [Extent1].[TicketType] AS int)) AND ([Extent2].[AllowAccessToOtherOperatorsProblems] = 1) AND ([Extent2].[ProblemLimitedAccess] <> cast(1 as bit))) OR ((5 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[AllowAccessToOtherOperatorsChanges] = 1) OR ((501 = [Extent1].[TicketStateValue]) AND ([Extent2].[WfDisableChangeTakeFromQueueAction] <> cast(1 as bit)))) AND ([Extent2].[ChangeLimitedAccess] <> cast(1 as bit))) OR ((2 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[AllowAccessToOtherOperatorsIncidents] = 1) OR ((201 = [Extent1].[TicketStateValue]) AND ([Extent2].[WfDisableIncidentTakeFromQueueAction] <> cast(1 as bit)))) AND (( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperators] AS [Extent4]
WHERE ([Extent1].[ServiceId] = [Extent4].[ServiceId]) AND (2 = CAST( [Extent4].[TicketType] AS int)) AND ([Extent4].[UserProviderKey] = @p__linq__12) AND (1 = [Extent4].[SupportLine])
)) OR ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[OperatorGroupUsers] AS [Extent5]
WHERE ([Extent5].[UserProviderKey] = @p__linq__13) AND ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperatorGroups] AS [Extent6]
WHERE ([Extent6].[ServiceId] = [Extent1].[ServiceId]) AND (2 = CAST( [Extent6].[TicketType] AS int)) AND (1 = [Extent6].[SupportLine]) AND ([Extent6].[OperatorGroupId] = [Extent5].[OperatorGroupId])
)))) OR ((2 = CAST( [Extent1].[TicketType] AS int)) AND ([Extent1].[IncidentFunctionEscalatedTo2ndLineSupport] = 1) AND ([Extent1].[Incident2ndLineSupportKey] IS NULL) AND (([Extent2].[Incident2ndLineSupportLimitedAccess] <> cast(1 as bit)) OR ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperators] AS [Extent7]
WHERE ([Extent1].[ServiceId] = [Extent7].[ServiceId]) AND (2 = CAST( [Extent7].[TicketType] AS int)) AND ([Extent7].[UserProviderKey] = @p__linq__14) AND (2 = [Extent7].[SupportLine])
)) OR ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[OperatorGroupUsers] AS [Extent8]
WHERE ([Extent8].[UserProviderKey] = @p__linq__15) AND ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperatorGroups] AS [Extent9]
WHERE ([Extent9].[ServiceId] = [Extent1].[ServiceId]) AND (2 = CAST( [Extent9].[TicketType] AS int)) AND (2 = [Extent9].[SupportLine]) AND ([Extent9].[OperatorGroupId] = [Extent8].[OperatorGroupId])
)))) OR ((3 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[AllowAccessToOtherOperatorsServiceRequests] = 1) OR ((301 = CAST( [Extent1].[TicketState] AS int)) AND ([Extent2].[WfDisableServiceRequestTakeFromQueueAction] <> cast(1 as bit)))) AND (( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperators] AS [Extent10]
WHERE ([Extent1].[ServiceId] = [Extent10].[ServiceId]) AND (3 = CAST( [Extent10].[TicketType] AS int)) AND ([Extent10].[UserProviderKey] = @p__linq__16)
)) OR ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[OperatorGroupUsers] AS [Extent11]
WHERE ([Extent11].[UserProviderKey] = @p__linq__17) AND ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperatorGroups] AS [Extent12]
WHERE ([Extent12].[ServiceId] = [Extent1].[ServiceId]) AND (3 = CAST( [Extent12].[TicketType] AS int)) AND ([Extent12].[OperatorGroupId] = [Extent11].[OperatorGroupId])
)))) OR ((4 = CAST( [Extent1].[TicketType] AS int)) AND ([Extent2].[AllowAccessToOtherOperatorsProblems] = 1) AND (( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperators] AS [Extent13]
WHERE ([Extent1].[ServiceId] = [Extent13].[ServiceId]) AND (4 = CAST( [Extent13].[TicketType] AS int)) AND ([Extent13].[UserProviderKey] = @p__linq__18)
)) OR ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[OperatorGroupUsers] AS [Extent14]
WHERE ([Extent14].[UserProviderKey] = @p__linq__19) AND ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperatorGroups] AS [Extent15]
WHERE ([Extent15].[ServiceId] = [Extent1].[ServiceId]) AND (4 = CAST( [Extent15].[TicketType] AS int)) AND ([Extent15].[OperatorGroupId] = [Extent14].[OperatorGroupId])
)))) OR ((5 = CAST( [Extent1].[TicketType] AS int)) AND (([Extent2].[AllowAccessToOtherOperatorsChanges] = 1) OR ((501 = [Extent1].[TicketStateValue]) AND ([Extent2].[WfDisableChangeTakeFromQueueAction] <> cast(1 as bit)))) AND (( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperators] AS [Extent16]
WHERE ([Extent1].[ServiceId] = [Extent16].[ServiceId]) AND (5 = CAST( [Extent16].[TicketType] AS int)) AND ([Extent16].[UserProviderKey] = @p__linq__20)
)) OR ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[OperatorGroupUsers] AS [Extent17]
WHERE ([Extent17].[UserProviderKey] = @p__linq__21) AND ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[ServiceOperatorGroups] AS [Extent18]
WHERE ([Extent18].[ServiceId] = [Extent1].[ServiceId]) AND (5 = CAST( [Extent18].[TicketType] AS int)) AND ([Extent18].[OperatorGroupId] = [Extent17].[OperatorGroupId])
)))) OR ( EXISTS (SELECT
1 AS [C1]
FROM [dbo].[TicketInvitations] AS [Extent19]
WHERE ([Extent19].[TicketId] = [Extent1].[Id]) AND (([Extent19].[InvitedUserProviderKey] = @p__linq__22) OR (([Extent19].[InvitedUserProviderKey] IS NULL) AND (@p__linq__22 IS NULL)))
)) OR ( EXISTS (SELECT
1 AS [C1]
FROM (SELECT
[Extent20].[CustomerId] AS [CustomerId]
FROM [dbo].[CustomerUsers] AS [Extent20]
WHERE ([Extent20].[UserProviderKey] = @p__linq__23) AND ([Extent20].[CanAccessOthersTickets] = 1)
INTERSECT
SELECT
[Extent21].[CustomerId] AS [CustomerId]
FROM [dbo].[CustomerUsers] AS [Extent21]
WHERE [Extent21].[UserProviderKey] = [Extent1].[SubmitterKey]) AS [Intersect1]
)) OR ( EXISTS (SELECT
1 AS [C1]
FROM (SELECT
[Extent22].[InternalGroupId] AS [InternalGroupId]
FROM [dbo].[InternalGroupUsers] AS [Extent22]
WHERE ([Extent22].[UserProviderKey] = @p__linq__24) AND ([Extent22].[CanAccessOthersTickets] = 1)
INTERSECT
SELECT
[Extent23].[InternalGroupId] AS [InternalGroupId]
FROM [dbo].[InternalGroupUsers] AS [Extent23]
WHERE [Extent23].[UserProviderKey] = [Extent1].[SubmitterKey]) AS [Intersect2]
) AS [GroupBy1]',N'@p__linq__0 bit,@p__linq__1 varchar(8000),@p__linq__2 varchar(8000),@p__linq__3 varchar(8000),@p__linq__4 varchar(8000),@p__linq__5 bit,@p__linq__6 bit,@p__linq__7 varchar(8000),@p__linq__8 varchar(8000),@p__linq__9 varchar(8000),@p__linq__10 varchar(8000),@p__linq__11 varchar(8000),@p__linq__12 varchar(8000),@p__linq__13 varchar(8000),@p__linq__14 varchar(8000),@p__linq__15 varchar(8000),@p__linq__16 varchar(8000),@p__linq__17 varchar(8000),@p__linq__18 varchar(8000),@p__linq__19 varchar(8000),@p__linq__20 varchar(8000),@p__linq__21 varchar(8000),@p__linq__22 varchar(8000),@p__linq__23 varchar(8000),@p__linq__24 varchar(8000)',@p__linq__0=0,@p__linq__1='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__2='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__3='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__4='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__5=1,@p__linq__6=0,@p__linq__7='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__8='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__9='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__10='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__11='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__12='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__13='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__14='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__15='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__16='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__17='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__18='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__19='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__20='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__21='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__22='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__23='31555851-b89d-4a15-bb05-5a6fd42f9552',@p__linq__24='31555851-b89d-4a15-bb05-5a6fd42f9552'
Generated DDL for the related tables (indexes are primarily on PKs and FKs):
CREATE TABLE [dbo].[CustomerUsers](
[UserProviderKey] [varchar](184) NOT NULL,
[CustomerId] [int] NOT NULL,
[CanAccessOthersTickets] [bit] NOT NULL,
CONSTRAINT [PK_dbo.CustomerUsers] PRIMARY KEY CLUSTERED
(
[UserProviderKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[InternalGroupUsers] Script Date: 7.5.2014 8:39:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[InternalGroupUsers](
[UserProviderKey] [varchar](184) NOT NULL,
[InternalGroupId] [int] NOT NULL,
[CanAccessOthersTickets] [bit] NOT NULL,
CONSTRAINT [PK_dbo.InternalGroupUsers] PRIMARY KEY CLUSTERED
(
[UserProviderKey] ASC,
[InternalGroupId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[OperatorGroupUsers] Script Date: 7.5.2014 8:39:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[OperatorGroupUsers](
[UserProviderKey] [varchar](184) NOT NULL,
[OperatorGroupId] [int] NOT NULL,
CONSTRAINT [PK_dbo.OperatorGroupUsers] PRIMARY KEY CLUSTERED
(
[UserProviderKey] ASC,
[OperatorGroupId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[ServiceDeputyManagers] Script Date: 7.5.2014 8:39:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[ServiceDeputyManagers](
[UserProviderKey] [varchar](184) NOT NULL,
[ServiceId] [int] NOT NULL,
[TicketType] [int] NOT NULL,
CONSTRAINT [PK_dbo.ServiceDeputyManagers] PRIMARY KEY CLUSTERED
(
[UserProviderKey] ASC,
[ServiceId] ASC,
[TicketType] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[ServiceOperatorGroups] Script Date: 7.5.2014 8:39:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[ServiceOperatorGroups](
[ServiceId] [int] NOT NULL,
[OperatorGroupId] [int] NOT NULL,
[TicketTypeValue] [int] NOT NULL,
[SupportLine] [int] NOT NULL,
[TicketType] [int] NOT NULL,
CONSTRAINT [PK_dbo.ServiceOperatorGroups] PRIMARY KEY CLUSTERED
(
[ServiceId] ASC,
[OperatorGroupId] ASC,
[TicketTypeValue] ASC,
[SupportLine] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[ServiceOperators] Script Date: 7.5.2014 8:39:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[ServiceOperators](
[UserProviderKey] [varchar](184) NOT NULL,
[ServiceId] [int] NOT NULL,
[TicketTypeValue] [int] NOT NULL,
[SupportLine] [int] NOT NULL,
[TicketType] [int] NOT NULL,
CONSTRAINT [PK_dbo.ServiceOperators] PRIMARY KEY CLUSTERED
(
[UserProviderKey] ASC,
[ServiceId] ASC,
[TicketTypeValue] ASC,
[SupportLine] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[Services] Script Date: 7.5.2014 8:39:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Services](
[Id] [int] IDENTITY(1,1) NOT NULL,
[ParentId] [int] NULL,
[Name] [nvarchar](256) NOT NULL,
[Description] [nvarchar](max) NULL,
[Disabled] [bit] NOT NULL,
[NewTicketLimitedAccess] [bit] NOT NULL,
[Incident1stLineSupportLimitedAccess] [bit] NOT NULL,
[Incident2ndLineSupportLimitedAccess] [bit] NOT NULL,
[ServiceRequestLimitedAccess] [bit] NOT NULL,
[ProblemLimitedAccess] [bit] NOT NULL,
[ServiceRequestManager] [varchar](184) NOT NULL,
[IncidentManager] [varchar](184) NOT NULL,
[ProblemManager] [varchar](184) NOT NULL,
[Deleted] [bit] NOT NULL,
[WfDisableIncidentAssignedState] [bit] NOT NULL,
[WfDisableIncidentConfirmedState] [bit] NOT NULL,
[WfDisableIncidentTakeFromQueueAction] [bit] NOT NULL,
[WfDisableIncidentFinishSolutionAction] [bit] NOT NULL,
[WfDisableServiceRequestAssignedState] [bit] NOT NULL,
[WfDisableServiceRequestConfirmedState] [bit] NOT NULL,
[WfDisableServiceRequestTakeFromQueueAction] [bit] NOT NULL,
[WfDisableServiceRequestFinishSolutionAction] [bit] NOT NULL,
[WfDisableServiceRequestPostponeAction] [bit] NOT NULL,
[ChangeLimitedAccess] [bit] NOT NULL,
[ChangeManager] [varchar](184) NOT NULL,
[WfDisableChangeTakeFromQueueAction] [bit] NOT NULL,
[WfDisableChangeAssignedState] [bit] NOT NULL,
[WfDisableChangeStartPreparationAction] [bit] NOT NULL,
[IsDepartment] [bit] NOT NULL,
[InheritsFromDepartment] [bit] NOT NULL,
[AllowSelectSolverBySubmitterForIncidents] [bit] NOT NULL,
[AllowSelectSolverBySubmitterForServiceRequests] [bit] NOT NULL,
[AllowSelectSolverBySubmitterForProblems] [bit] NOT NULL,
[AllowSelectSolverBySubmitterForChanges] [bit] NOT NULL,
[AllowAccessToOtherOperatorsIncidents] [bit] NOT NULL,
[AllowAccessToOtherOperatorsServiceRequests] [bit] NOT NULL,
[AllowAccessToOtherOperatorsProblems] [bit] NOT NULL,
[AllowAccessToOtherOperatorsChanges] [bit] NOT NULL,
[AllowChangeDeadlineForIncidents] [bit] NOT NULL,
[AllowChangeDeadlineForServiceRequests] [bit] NOT NULL,
[AllowChangeDeadlineForProblems] [bit] NOT NULL,
[AllowChangeDeadlineForChanges] [bit] NOT NULL,
[AllowSelectPriorityForServiceRequests] [bit] NOT NULL,
[WfDisableIncidentCompletedState] [bit] NOT NULL,
[WfDoIncidentCompleteActionBySubmittersMessage] [bit] NOT NULL,
[WfDisableServiceRequestCompletedState] [bit] NOT NULL,
[WfDoServiceRequestCompleteActionBySubmittersMessage] [bit] NOT NULL,
CONSTRAINT [PK_dbo.Services] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[TicketInvitations] Script Date: 7.5.2014 8:39:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[TicketInvitations](
[Id] [int] IDENTITY(1,1) NOT NULL,
[TicketId] [int] NOT NULL,
[InitiatorUserProviderKey] [varchar](184) NULL,
[InitiatorFullName] [nvarchar](max) NULL,
[InvitedUserProviderKey] [varchar](184) NULL,
[InvitedFullName] [nvarchar](max) NULL,
[Type] [int] NOT NULL,
[CreatedUTC] [datetime] NOT NULL,
CONSTRAINT [PK_dbo.TicketInvitations] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[Tickets] Script Date: 7.5.2014 8:39:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Tickets](
[Id] [int] IDENTITY(1,1) NOT NULL,
[ParentId] [int] NULL,
[ServiceId] [int] NULL,
[ServiceMailboxId] [int] NULL,
[TicketTypeValue] [int] NOT NULL,
[TicketTypeIdREF] [int] NOT NULL,
[SubmitterKey] [varchar](184) NOT NULL,
[SubmitterFullName] [nvarchar](256) NULL,
[CustomerId] [int] NULL,
[SolverKey] [varchar](184) NULL,
[SolverFullName] [nvarchar](256) NULL,
[Subject] [nvarchar](max) NULL,
[CreatedUTC] [datetime] NOT NULL,
[Archived] [bit] NOT NULL,
[MarkedAsSolvedUTC] [datetime] NULL,
[ArchivedUTC] [datetime] NULL,
[TicketSourceValue] [int] NOT NULL,
[OperatorKey] [varchar](184) NULL,
[DeadlineUTC] [datetime] NULL,
[DeadlineLastNotificatedPercentage] [int] NULL,
[UrgencyValue] [int] NULL,
[ImpactValue] [int] NULL,
[PriorityValue] [int] NULL,
[TicketStateValue] [int] NOT NULL,
[IncidentFunctionEscalatedTo2ndLineSupport] [bit] NOT NULL,
[Incident2ndLineSupportKey] [varchar](184) NULL,
[Incident2ndLineSupportFullName] [nvarchar](max) NULL,
[TicketType] [int] NOT NULL,
[Source] [int] NOT NULL,
[TicketState] [int] NOT NULL,
[Urgency] [int] NULL,
[Impact] [int] NULL,
[TicketSummaryState] [int] NOT NULL,
[ResolutionText] [nvarchar](max) NULL,
[ResolutionModifiedUTC] [datetime] NULL,
[ResolutionEdited] [bit] NOT NULL,
[ResolutionUserProviderKey] [varchar](184) NULL,
[ResolutionFullName] [nvarchar](max) NULL,
[TicketSubType] [int] NULL,
[ChangeRiskProbabilityValue] [int] NULL,
[ChangeImpactValue] [int] NULL,
[ChangeRiskCategoryValue] [int] NULL,
[RfcText] [nvarchar](max) NULL,
[RfcModifiedUTC] [datetime] NULL,
[RfcEdited] [bit] NOT NULL,
[RfcUserProviderKey] [varchar](184) NULL,
[RfcFullName] [nvarchar](max) NULL,
[ManualDeadline] [bit] NOT NULL,
[ContactInformation] [nvarchar](256) NULL,
[Imported] [bit] NOT NULL,
[ForceClosed] [bit] NOT NULL,
CONSTRAINT [PK_dbo.Tickets] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Index [IX_CustomerId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_CustomerId] ON [dbo].[CustomerUsers]
(
[CustomerId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_InternalGroupId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_InternalGroupId] ON [dbo].[InternalGroupUsers]
(
[InternalGroupId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_OperatorGroupId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_OperatorGroupId] ON [dbo].[OperatorGroupUsers]
(
[OperatorGroupId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_ServiceId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_ServiceId] ON [dbo].[ServiceDeputyManagers]
(
[ServiceId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_OperatorGroupId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_OperatorGroupId] ON [dbo].[ServiceOperatorGroups]
(
[OperatorGroupId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_ServiceId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_ServiceId] ON [dbo].[ServiceOperatorGroups]
(
[ServiceId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_ServiceId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_ServiceId] ON [dbo].[ServiceOperators]
(
[ServiceId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_ParentId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_ParentId] ON [dbo].[Services]
(
[ParentId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_TicketId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_TicketId] ON [dbo].[TicketInvitations]
(
[TicketId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [IX_TicketInvitations_InvitedUserProviderKey_TicketId] Script Date: 7.5.2014 8:39:38 ******/
CREATE UNIQUE NONCLUSTERED INDEX [IX_TicketInvitations_InvitedUserProviderKey_TicketId] ON [dbo].[TicketInvitations]
(
[InvitedUserProviderKey] ASC,
[TicketId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_CustomerId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_CustomerId] ON [dbo].[Tickets]
(
[CustomerId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_ParentId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_ParentId] ON [dbo].[Tickets]
(
[ParentId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_ServiceId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_ServiceId] ON [dbo].[Tickets]
(
[ServiceId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_ServiceMailboxId] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_ServiceMailboxId] ON [dbo].[Tickets]
(
[ServiceMailboxId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [IX_SolverFullName] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_SolverFullName] ON [dbo].[Tickets]
(
[SolverFullName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [IX_SolverKey] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_SolverKey] ON [dbo].[Tickets]
(
[SolverKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [IX_SubmitterFullName] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_SubmitterFullName] ON [dbo].[Tickets]
(
[SubmitterFullName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [IX_SubmitterKey] Script Date: 7.5.2014 8:39:38 ******/
CREATE NONCLUSTERED INDEX [IX_SubmitterKey] ON [dbo].[Tickets]
(
[SubmitterKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IX_Tickets_TicketType_TicketTypeIdREF] Script Date: 7.5.2014 8:39:38 ******/
CREATE UNIQUE NONCLUSTERED INDEX [IX_Tickets_TicketType_TicketTypeIdREF] ON [dbo].[Tickets]
(
[TicketType] ASC,
[TicketTypeIdREF] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
ALTER TABLE [dbo].[CustomerUsers] WITH CHECK ADD CONSTRAINT [FK_dbo.CustomerUsers_dbo.Customers_CustomerId] FOREIGN KEY([CustomerId])
REFERENCES [dbo].[Customers] ([Id])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[CustomerUsers] CHECK CONSTRAINT [FK_dbo.CustomerUsers_dbo.Customers_CustomerId]
GO
ALTER TABLE [dbo].[InternalGroupUsers] WITH CHECK ADD CONSTRAINT [FK_dbo.InternalGroupUsers_dbo.InternalGroups_InternalGroupId] FOREIGN KEY([InternalGroupId])
REFERENCES [dbo].[InternalGroups] ([Id])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[InternalGroupUsers] CHECK CONSTRAINT [FK_dbo.InternalGroupUsers_dbo.InternalGroups_InternalGroupId]
GO
ALTER TABLE [dbo].[OperatorGroupUsers] WITH CHECK ADD CONSTRAINT [FK_dbo.OperatorGroupUsers_dbo.OperatorGroups_OperatorGroupId] FOREIGN KEY([OperatorGroupId])
REFERENCES [dbo].[OperatorGroups] ([Id])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[OperatorGroupUsers] CHECK CONSTRAINT [FK_dbo.OperatorGroupUsers_dbo.OperatorGroups_OperatorGroupId]
GO
ALTER TABLE [dbo].[ServiceDeputyManagers] WITH CHECK ADD CONSTRAINT [FK_dbo.ServiceDeputyManagers_dbo.Services_ServiceId] FOREIGN KEY([ServiceId])
REFERENCES [dbo].[Services] ([Id])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[ServiceDeputyManagers] CHECK CONSTRAINT [FK_dbo.ServiceDeputyManagers_dbo.Services_ServiceId]
GO
ALTER TABLE [dbo].[ServiceOperatorGroups] WITH CHECK ADD CONSTRAINT [FK_dbo.ServiceOperatorGroups_dbo.OperatorGroups_OperatorGroupId] FOREIGN KEY([OperatorGroupId])
REFERENCES [dbo].[OperatorGroups] ([Id])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[ServiceOperatorGroups] CHECK CONSTRAINT [FK_dbo.ServiceOperatorGroups_dbo.OperatorGroups_OperatorGroupId]
GO
ALTER TABLE [dbo].[ServiceOperatorGroups] WITH CHECK ADD CONSTRAINT [FK_dbo.ServiceOperatorGroups_dbo.Services_ServiceId] FOREIGN KEY([ServiceId])
REFERENCES [dbo].[Services] ([Id])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[ServiceOperatorGroups] CHECK CONSTRAINT [FK_dbo.ServiceOperatorGroups_dbo.Services_ServiceId]
GO
ALTER TABLE [dbo].[ServiceOperators] WITH CHECK ADD CONSTRAINT [FK_dbo.ServiceOperators_dbo.Services_ServiceId] FOREIGN KEY([ServiceId])
REFERENCES [dbo].[Services] ([Id])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[ServiceOperators] CHECK CONSTRAINT [FK_dbo.ServiceOperators_dbo.Services_ServiceId]
GO
ALTER TABLE [dbo].[Services] WITH CHECK ADD CONSTRAINT [FK_dbo.Services_dbo.Services_ParentId] FOREIGN KEY([ParentId])
REFERENCES [dbo].[Services] ([Id])
GO
ALTER TABLE [dbo].[Services] CHECK CONSTRAINT [FK_dbo.Services_dbo.Services_ParentId]
GO
ALTER TABLE [dbo].[TicketInvitations] WITH CHECK ADD CONSTRAINT [FK_dbo.TicketInvitations_dbo.Tickets_TicketId] FOREIGN KEY([TicketId])
REFERENCES [dbo].[Tickets] ([Id])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[TicketInvitations] CHECK CONSTRAINT [FK_dbo.TicketInvitations_dbo.Tickets_TicketId]
GO
ALTER TABLE [dbo].[Tickets] WITH CHECK ADD CONSTRAINT [FK_dbo.Tickets_dbo.Customers_CustomerId] FOREIGN KEY([CustomerId])
REFERENCES [dbo].[Customers] ([Id])
GO
ALTER TABLE [dbo].[Tickets] CHECK CONSTRAINT [FK_dbo.Tickets_dbo.Customers_CustomerId]
GO
ALTER TABLE [dbo].[Tickets] WITH CHECK ADD CONSTRAINT [FK_dbo.Tickets_dbo.ServiceMailboxes_ServiceMailboxId] FOREIGN KEY([ServiceMailboxId])
REFERENCES [dbo].[ServiceMailboxes] ([Id])
GO
ALTER TABLE [dbo].[Tickets] CHECK CONSTRAINT [FK_dbo.Tickets_dbo.ServiceMailboxes_ServiceMailboxId]
GO
ALTER TABLE [dbo].[Tickets] WITH CHECK ADD CONSTRAINT [FK_dbo.Tickets_dbo.Services_ServiceId] FOREIGN KEY([ServiceId])
REFERENCES [dbo].[Services] ([Id])
GO
ALTER TABLE [dbo].[Tickets] CHECK CONSTRAINT [FK_dbo.Tickets_dbo.Services_ServiceId]
GO
ALTER TABLE [dbo].[Tickets] WITH CHECK ADD CONSTRAINT [FK_dbo.Tickets_dbo.Tickets_ParentId] FOREIGN KEY([ParentId])
REFERENCES [dbo].[Tickets] ([Id])
GO
ALTER TABLE [dbo].[Tickets] CHECK CONSTRAINT [FK_dbo.Tickets_dbo.Tickets_ParentId]
GO
-
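A common way to attack an OR-heavy access check like the query in this thread is to split the alternatives into separate, individually indexable branches, or to let the optimizer prune branches that the current parameter values disable. The following is only a hedged sketch against the posted DDL, not a tested rewrite of the full generated query; @userKey stands in for the repeated @p__linq__ parameters and is an assumption.

```sql
-- Sketch: replace one query with ~10 OR'ed access rules by a UNION of
-- branches that can each use one of the existing nonclustered indexes.
SELECT COUNT(*) FROM (
    SELECT t.[Id] FROM [dbo].[Tickets] AS t
    WHERE t.[SubmitterKey] = @userKey          -- can seek IX_SubmitterKey
    UNION                                      -- UNION (not UNION ALL) de-duplicates tickets
    SELECT t.[Id] FROM [dbo].[Tickets] AS t
    WHERE t.[SolverKey] = @userKey             -- can seek IX_SolverKey
    UNION
    SELECT ti.[TicketId] FROM [dbo].[TicketInvitations] AS ti
    WHERE ti.[InvitedUserProviderKey] = @userKey
    -- ...one branch per remaining access rule...
) AS AccessibleTickets;

-- Alternatively, keep the EF-generated query but append
--   OPTION (RECOMPILE)
-- so branches made unsatisfiable by the current parameter values
-- (e.g. @p__linq__0 = 0) can be pruned from the plan.
```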
WHERE Clause performance based on order - ? Maybe?
Hello everyone - thanks in advance for the always helpful help. :-)
I have a query...
SELECT
WOH.COMPANY_NUMBER,
WOH.ACCOUNT_NUMBER,
COUNT(WOH.WORK_ORDER_NUMBER) AS TROLLS
FROM
PENDING_WORK_ORDERS PWO
INNER JOIN KAN_WORK_ORDER_MASTER_HISTORY WOH
ON PWO.ACCOUNT_NUMBER = WOH.ACCOUNT_NUMBER
WHERE
WOH.OFFICE_ONLY_FLAG <> 'Y'
AND WOH.WO_STATUS = 'CP'
AND WOH.SCHEDULE_DATE BETWEEN ('1' || TO_CHAR(TO_DATE(SUBSTR(PWO.DATE_ENTERED, 2, 6), 'YYMMDD') - 30, 'YYMMDD'))
AND PWO.DATE_ENTERED
GROUP BY WOH.COMPANY_NUMBER, WOH.ACCOUNT_NUMBER
The KAN_WORK_ORDER_MASTER_HISTORY file has approx. 10,000,000 records in it. Does the order of the WHERE conditions affect performance? This query now takes approx. 40 minutes to run, and I was wondering: if I reordered the WHERE clause (i.e., brought the conditions that cut the results the most to the top), would that work?
If not - any suggestions to speed this up? There is only a small set of these records that we really need - probably about 1% will actually match - is there any way to narrow down this set before the WHERE search is done? The SCHEDULE_DATE field is stored as a number in the format 1YYMMDD - the dates go all the way back to 2003 but we only need the last two months' work (1071101 - 1071217).
Am I making any sense? Sometimes these issues are very hard to explain...
Thanks again.
Brett
so I have to use the TO_DATE and TO_CHAR functions quite a bit - which always seem to slow the query down considerably. There is that to consider: indexes will not get used unless you try a function-based index, as also suggested.
The real problem may be the cardinality is impossible to estimate the number of rows each operation may return, and as the volumes are large the impact will be worse.
we are only needing the last two months work (1071101 - 1071217).
There are nearly three times more possible values between the two numbers than there are days between the two dates, and the optimizer bases its plan on this information.
SQL> select 1071101 - 1071217 from dual;
1071101-1071217
-116
SQL> select to_date('071101','yymmdd') - to_date('071217','yymmdd') from dual;
TO_DATE('071101','YYMMDD')-TO_DATE('071217','YYMMDD')
-46
this is for a national corporation that i have absolutely no say in how it is formatted or administered...
And right now it is going as fast as it has been designed to go. When someone has built and provided you with a farm tractor, you aren't going to be winning any Formula 1 or NASCAR races. -
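One way to keep the TO_DATE/TO_CHAR conversions out of the WHERE clause entirely (so an index on SCHEDULE_DATE can be used with a plain numeric BETWEEN) is to compute the 1YYMMDD bounds in the calling application and pass them as bind variables. A minimal sketch, assuming the leading '1' is a fixed century flag as in the example values; `LegacyDate` and `toLegacy` are hypothetical names:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class LegacyDate {
    // Convert a date to the legacy 1YYMMDD numeric format (assumption:
    // the leading '1' is a century flag for years 2000+, as suggested
    // by the sample values in the thread).
    static int toLegacy(LocalDate d) {
        return 1_000_000 + Integer.parseInt(d.format(DateTimeFormatter.ofPattern("yyMMdd")));
    }

    public static void main(String[] args) {
        LocalDate end = LocalDate.of(2007, 12, 17);
        LocalDate start = end.minusDays(46);
        // These bounds would be bound into:
        //   WHERE WOH.SCHEDULE_DATE BETWEEN :start AND :end
        System.out.println(toLegacy(start) + " " + toLegacy(end));
    }
}
```

Because the comparison is then between a raw column and two constants, the predicate is sargable and the optimizer can range-scan an index instead of evaluating a function per row.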
EJB3 - where to perform JMS JNDI lookups?
Hi, I was reading about how the WebLogic jms wrappers work at:
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jms/j2ee.html
and noticed this section:
"The JNDI lookups of the Connection Factory and Destination objects can be expensive in terms of performance. This is particularly true if the Destination object points to a Foreign JMS Destination MBean, and therefore, is a lookup on a non-local JNDI provider.". I am using Sonic MQ as my foreign JMS Provider hence this is of particular interest to me.
The document recommends caching these lookups in the ejbCreate() method of an EJB. I'm new to EJB3 but notice there is no concept of an ejbCreate() method there so where should I cache the lookups and how do I ensure they get re-looked up in the case of a connection failure?
Many thanks
Mandy
Thanks very much Tom, this helps a lot. I think my confusion lay in the fact that this document talks about caching the JNDI lookups in ejbCreate and gives the example PoolTestBean.java, which uses EJB2-style code. I've made your recommended changes to my code; would you mind just casting your eye over it to see if it looks ok? I have chosen to cache on create of the bean rather than on first invocation, as I want clients to fail on startup rather than during their processing.
Sorry about code layout, not sure how to use HTML in posts to make it verbatim..
@Stateless
@TransactionAttribute(NEVER)
//@ExcludeDefaultInterceptors
public class ServiceWrapperBean implements ServiceWrapper {
// injected resources
@Resource
private SessionContext sctx; // inject the bean context
@Resource(name = "sonicConnectionFactory", mappedName = "sonic.connFactory", shareable = true)
private ConnectionFactory connectionFactory;
@Resource(name = "LegacyAccessIn", mappedName = "queue/LegacyIn", shareable = true)
private Destination sendQueue;
public void sendMessage(String msg) {
if (connectionFactory == null)
connectionFactory = (javax.jms.ConnectionFactory) sctx
.lookup("sonicConnectionFactory");
if (sendQueue == null)
sendQueue = (javax.jms.Destination) sctx.lookup("LegacyAccessIn");
if (msg == null)
throw new IllegalArgumentException("object cannot be null!");
Connection con = null;
Session session = null;
MessageProducer sender = null;
try {
con = connectionFactory.createConnection();
session = con.createSession(true, Session.AUTO_ACKNOWLEDGE);
sender = session.createProducer(null);
Message message = session.createTextMessage("do stuff");
sender.send(sendQueue, message);
} catch (JMSException e) {
// Invalidate the JNDI objects if there is a failure
// this is necessary because the destination object
// may become invalid if the destination server has
// been shut down
connectionFactory = null;
sendQueue = null;
throw new RuntimeException(e);
} finally {
if (con != null) {
try {
// Return JMS resources to the resource reference pool for later re-use.
// Closing a connection automatically also closes its sessions, etc.
con.close(); // also closes other objects
} catch (JMSException je) {
// ignore
}
}
}
}
}
-
Where to perform date conversion for display?
Hi,
We have started our application with operation for single country, all data located in single database. The transaction history time stored is all in GMT+8.
However, with business expansion, we now have to cater for multiple country that is of different timezone. I need to display the transaction history in local time to the user of the different timezone.
Gut feel says i should continue to store all the time in a single timezone (i.e. probably GMT+8 for my case). And perform conversion during for the display.
Question is where should i be doing this? I am using Struts for web tier, EJB and Mysql. Based on MVC pattern, it seems that this should be done in Struts layer (probably DispatchAction).
Any recommended pattern? Any idea how to handle daylight saving?
Yes, you should store all your timestamps in the same timezone. And it would be best if your database server uses the same timezone as your application server, otherwise you could run into problems.
Now, the java.util.Date object and its SQL relative java.sql.Timestamp consist only of a single number, which is the number of milliseconds since a certain point in time. Normally they are what you get from the database. The thing to remember about them is that they do not have a timezone. As you suspect, displaying them applies a timezone to them. If the displaying is done by SimpleDateFormat -- which it should be, whether you do it in code or Struts does it -- you just need to make sure that the SimpleDateFormat has been assigned the correct timezone. This takes care of daylight savings time as well.
Finding out what timezone your users are in may be more tricky than you expect, but this depends on where they are located. You can't necessarily derive a timezone from a country, so you may have to ask the user what their timezone is. -
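The advice above can be sketched in a few lines: store a single instant (a `java.util.Date`) and let a `SimpleDateFormat` carrying the user's timezone do the display, which also handles daylight saving automatically. The zone IDs below are only examples:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TzDisplay {
    // Format the same instant for a given user's timezone.
    static String format(Date instant, String tzId) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone(tzId));
        return fmt.format(instant);
    }

    public static void main(String[] args) {
        Date instant = new Date(0L); // epoch: 1970-01-01T00:00:00Z
        System.out.println(format(instant, "GMT+8"));            // 1970-01-01 08:00
        System.out.println(format(instant, "America/New_York")); // 1969-12-31 19:00
    }
}
```

Note that the stored value never changes; only the formatter's zone does, so this slots naturally into the view (Struts) layer.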
Where to perform UCCHECK etc in a UNICODE System or NonUNicode System
Hi,
In order to find out whether our system is a Unicode system or not, generally we can do that by System --> Status.
Here in my case my system is not a Unicode system.
But My project is a Combined Upgrade and Unicode . We are doing the UCCHECK for Unicode Errors.
My doubt is: do we have to perform UCCHECK etc. in a Unicode system or in a non-Unicode system?
Thanks in advance
Sriram..
Hi
The Link will be helpful to you.
Re: Upgrade 4.6 to ECC - What are the responsibilites
regarding Unicode influence in Standard programs
Very good document:
http://www.doag.org/pub/docs/sig/sap/2004-03/Buhlinger_Maxi_Version.pdf
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d37d1ad9-0b01-0010-ed9f-bc3222312dd8
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/589d18d9-0b01-0010-ac8a-8a22852061a2
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f8e316d9-0b01-0010-8e95-829a58c1511a
You need to use the transaction UCCHECK.
The report documentation is here
ABAP Unicode Scan Tool UCCHECK
You can use transaction UCCHECK to examine a Unicode program set for syntax errors without having to set the program attribute "Unicode checks active" for every individual program. From the list of Unicode syntax errors, you can go directly to the affected programs and remove the errors. It is also possible to automatically create transport requests and set the Unicode program attribute for a program set.
Some application-specific checks, which draw your attention to program points that are not Unicode-compatible, are also integrated.
Selection of Objects:
The program objects can be selected according to object name, object type, author (TADIR), package, and original system. For the Unicode syntax check, only object types for which an independent syntax check can be carried out are appropriate. The following object types are possibilities:
PROG Report
CLAS Class
FUGR Function groups
FUGX Function group (with customer include, customer area)
FUGS Function group (with customer include, SAP area)
LDBA Logical Database
CNTX Context
TYPE Type pool
INTF Interface
Only Examine Programs with Non-Activated Unicode Flag
By default, the system only displays program objects that have not yet set the Unicode attribute. If you want to use UCCHECK to process program objects that have already set the attribute, you can deactivate this option.
Only Objects with TADIR Entry
By default, the system only displays program objects with a TADIR entry. If you want to examine programs that don't have a TADIR entry, for example locally generated programs without a package, you can deactivate this option.
Exclude Packages $*
By default, the system does not display program objects that are in a local, non-transportable package. If you want to examine programs that are in such a package, you can deactivate this option.
Display Modified SAP Programs Also
By default, SAP programs are not checked in customer systems. If you also want to check SAP programs that were modified in a customer system (see transaction SE95), you can activate this option.
Maximum Number of Programs:
To avoid timeouts or unexpectedly long waiting times, the maximum number of program objects is preset to 50. If you want to examine more objects, you must increase the maximum number or run a SAMT scan (general program set processing). The latter also has the advantage that the data is stored persistently. Proceed as follows:
- Call transaction SAMT
- Create task with program RSUNISCAN_FINAL, subroutine SAMT_SEARCH
For further information refer to documentation for transaction SAMT.
Displaying Points that Cannot Be Analyzed Statically
If you choose this option, you get an overview of the program points, where a static check for Unicode syntax errors is not possible. This can be the case if, for example, parameters or field symbols are not typed or you are accessing a field or structure with variable length/offset. At these points the system only tests at runtime whether the code is sufficient for the stricter Unicode tests. If possible, you should assign types to the variables used, otherwise you must check runtime behavior after the Unicode attribute has been set.
To be able to differentiate between your own and foreign code (for example when using standard includes or generated includes), there is a selection option for the includes to be displayed. By default, the system excludes the standard includes of the view maintenance LSVIM* from the display, because they cause a large number of messages that are not relevant for the Unicode conversion. It is recommended that you also exclude the generated function group-specific includes of the view maintenance (usually L<function group name>F00 and L<function group name>I00) from the display.
Similarly to the process in the extended syntax check, you can hide the warning using the pseudo comment ("#EC *).
Applikation-Specific Checks
These checks indicate program points that represent a public interface but are not Unicode-compatible. Under Unicode, the corresponding interfaces change according to the referenced documentation and must be adapted appropriately.
View Maintenance
Parts of the view maintenance generated in older releases are not Unicode-compatible. The relevant parts can be regenerated with a service report.
UPLOAD/DOWNLOAD
The function modules UPLOAD, DOWNLOAD or WS_UPLOAD and WS_DOWNLOAD are obsolete and cannot run under Unicode. Refer to the documentation for these modules to find out which routines serve as replacements.
Regards
Anji -
Where is "performance" option in PSE 12?
So, I want to check that my OS SSD is not being used as a scratch disk..... I thought it was simple - edit>preferences>performance. But I have no performance option - what's up?
It's only in the editor, not the organizer.
-
Where to perform System.loadLibrary()
I have a Java web app that uses JSPs and servlets. The JSPs use an object that needs a DLL loaded through System.loadLibrary().
When I put the code into the object's static initializer, I get an "Another loader has already loaded this library" exception.
First of all, the javadocs say that this function doesn't throw an exception if the library is already loaded.
Second of all, where is a good place to load this DLL?
Brett Slocum
[email protected]
P.S. I've tried putting the loadLibrary() call in the init() of a servlet in the system, but then I get a "library not loaded" message in the JSP.
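The "Another loader has already loaded this library" error arises because the JVM binds a native library to exactly one classloader; when the JSP and the servlet are loaded by different webapp classloaders, the second load fails. The usual fix is to put the class that calls System.loadLibrary() on the container's shared classpath so it is loaded exactly once, and to guard the call so repeated initialization is harmless. A minimal sketch of the guard, where `NativeLoader` is a hypothetical name:

```java
public class NativeLoader {
    private static boolean loaded = false;

    // Load the DLL once per JVM. An UnsatisfiedLinkError whose message
    // says "already loaded" is tolerated; any other link error (e.g.
    // the library is genuinely missing) is rethrown to the caller.
    public static synchronized void ensureLoaded(String libName) {
        if (loaded) return;
        try {
            System.loadLibrary(libName);
        } catch (UnsatisfiedLinkError e) {
            if (!String.valueOf(e.getMessage()).contains("already loaded")) {
                throw e;
            }
        }
        loaded = true;
    }
}
```

Deploying this class in the server's common/shared lib directory (rather than inside each WAR) means every webapp sees the same class, the same classloader, and the same `loaded` flag, which avoids both of the errors described above.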
-
Performance Issues with Debugging even in Display Mode
Hi not certain if this would sit in Security, ABAP or Basis, but lets start here as it is security related.
S_DEVELOP with any activity on DEBUG on a production system is a concern, but what are the performance related issues when a super user has to go into debug in display only on a production system because of a really complex issue?
I've heard in the past of a scenario where system performance was impacted, and we have notes around the allocation of S_DEVELOP display DEBUG access to this point. (I've summarised these below)
The risk with debug is associated with the length of time that the actual debugging process is being performed.
• Work processes are dedicated solely to the users for the duration of the debug. If these are being performed for a long time, this can cause issues with not enough work processes being available.
• It can cause DB2 locks. If the debug session lasts a while, DB2 locks are not released. This impacts the availability of tablespaces, thus affecting various transactions running across the system.
Even with these concerns, security will often get asked for debug display access.
As security is about risk identification, assessment and then controlled access what do other organisations do?
Options (not exhaustive) are "No Debug ever" or "Debug display only via a fire fight or super user on a time limited basis".
We are currently in the "debug display only via fire fight" camp, but would like to canvas opinion on this.
As one of the concepts of security is Availability of data (and to an extent ensuring the systems are up and running), do the performance risks push the security function to the "No Debug Ever" stance?
If you need to debug in production, then 9 times out of 10 you need to do root-cause analysis: the developer is the problem.
Writing sloppy code and not testing properly should not be an excuse for debugging in production.
But of course, there are exceptions even when you do try to keep them to a minimum.
To add to Jurjen's comments, also note that the debugger only has a limited capability of doing a rollback. So you can quite easily and unintentionally create inconsistencies in the system - also in display mode - which is an integrity problem, and typically more critical than availability problems or even potential confidentiality concerns.
Cheers,
Julius
Edited by: Julius Bussche on May 15, 2009 10:50 AM -
Improving the performance of Crystal Reports for Eclipse 2.0
Hi,
I am having some performance issues with displaying reports, where it can take up to 30 seconds per user for each new session for the report to display. If we run this directly from the client (through Crystal 2008) it takes about 2 seconds.
The product only has 4 different rpt files, but they are constantly viewed by the clients (although with different parameters). The users tend to come onto the system, browse a couple of reports and log off. They will do this about 3 - 5 times a day.
1) Can you cache the reports at an application level (rather than the session) (and is it recommended)
2) Create a separate web-service just hosting the Crystal Reports
3) Other mechanisms of calling the report (currently using addDiscreteParameterValue, replaceConnection, Logon [this replaceConnection and Logon is done for the master and every subreport], then using processHttpRequest
Anyone got any advice , recommendations or pearls of wisdom?
Probably have max 15 concurrent users which process these 4 reports.
Kind regards
Matt.
CR4E 2.0 currently uses 5 CPLs (5 concurrent process licenses), which means it services up to five concurrent report requests. This isn't per-session, per-report, or per-user, but per-request (open report, next page, export).
So having 15 simultaneous users may lead to some requests being queued till a license is free.
For a more scalable solution, the recommendation is to go with a server-client solution like Crystal Reports Server Embedded or BusinessObjects Enterprise.
But to tune your CR4E app to see how much you'll be able to service, what I recommend is turning on Log4J logging to see where the performance is going.
Going from 2 sec to 30 sec between CR Designer (binary app) to CRJ (pure Java app) isn't out of performance expectations, but there may be ways to tune it.
For example, if you're doing replaceConnection or setTableLocation, you may just want to do it once to the rpt file during deployment, so you'd not need to change the connection info every time the report is run.
Saving to application context isn't something that CRJ is designed for - it's designed to have the ReportSource per-Session.
Sincerely,
Ted Ueda -
Database migrated from Oracle 10g to 11g Discoverer report performance issu
Hi All,
We are now getting issue in Discoverer Report performance as the report is keep on running when database got upgrade from 10g to 11g.
In database 10g the report is working fine but the same report is not working fine in 11g.
I have changed the query: I passed the date format TO_CHAR('DD-MON-YYYY') and removed the NVL & TRUNC functions from the existing query.
The report is now working fine in Database 11g backhand but when I am using the same query in Discoverer it is not working and report is keep on running.
Please advise.
Regards,
Pl post exact OS, database and Discoverer versions. After the upgrade, have statistics been updated? Have you traced the Discoverer query to determine where the performance issue is?
How To Find Oracle Discoverer Diagnostic and Tracing Guides [ID 290658.1]
How To Enable SQL Tracing For Discoverer Sessions [ID 133055.1]
Discoverer 11g: Performance degradation after Upgrade to Database 11g [ID 1514929.1]
HTH
Srini -
Getting realistic performance expectations.
I am running tests to see if I can use the Oracle Berkeley XML database as a backend to a web application but am running into query response performance limitations. As per the suggestions for performance related questions, I have pulled together answers to the series of questions that need to be addressed, and they are given below. The basic issue at stake, however, is am I being realistic about what I can expect to achieve with the database?
Regards
Geoff Shuetrim
Oracle Berkeley DB XML database performance.
Berkeley DB XML Performance Questionnaire
1. Describe the Performance area that you are measuring? What is the
current performance? What are your performance goals you hope to
achieve?
I am using the database as a back end to a web application that is expected
to field a large number of concurrent queries.
The database scale is described below.
Current performance involves responses to simple queries that involve 1-2
minute turn around (this improves after a few similar queries have been run,
presumably because of caching, but not to a point that is acceptable for
web applications).
Desired performance is for queries to execute in milliseconds rather than
minutes.
2. What Berkeley DB XML Version? Any optional configuration flags
specified? Are you running with any special patches? Please specify?
Berkeley DB XML Version: 2.4.16.1
Configuration flags: --enable-java -b 64 --prefix=/usr/local/BerkeleyDBXML-2.4.16
No special patches have been applied.
3. What Berkeley DB Version? Any optional configuration flags
specified? Are you running with any special patches? Please Specify.
Berkeley DB Version? 4.6.21
Configuration flags: None. The Berkeley DB was built and installed as part of the
Oracle Berkeley XML database build and installation process.
No special patches have been applied.
4. Processor name, speed and chipset?
Intel Core 2 CPU 6400 @ 2.13 GHz (1066 FSB) (4MB Cache)
5. Operating System and Version?
Ubuntu Linux 8.04 (Hardy) with the 2.6.24-23 generic kernel.
6. Disk Drive Type and speed?
300 GB 7200RPM hard drive.
7. File System Type? (such as EXT2, NTFS, Reiser)
EXT3
8. Physical Memory Available?
Memory: 3.8GB DDR2 SDRAM
9. Are you using Replication (HA) with Berkeley DB XML? If so, please
describe the network you are using, and the number of Replicas.
No.
10. Are you using a Remote Filesystem (NFS) ? If so, for which
Berkeley DB XML/DB files?
No.
11. What type of mutexes do you have configured? Did you specify
--with-mutex=? Specify what you find in your config.log, search
for db_cv_mutex?
I did not specify -with-mutex when building the database.
config.log indicates:
db_cv_mutex=POSIX/pthreads/library/x86_64/gcc-assembly
12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
Which compiler and version?
I am using the Java API.
I am using the gcc 4.2.4 compiler.
I am using the g++ 4.2.4 compiler.
13. If you are using an Application Server or Web Server, please
provide the name and version?
I am using the Tomcat 5.5 application server.
It is not using the Apache Portable Runtime library.
It is being run using a 64 bit version of the Sun Java 1.5 JRE.
14. Please provide your exact Environment Configuration Flags (include
anything specified in you DB_CONFIG file)
I do not have a DB_CONFIG file in the database home directory.
My environment configuration is as follows:
Threaded = true
AllowCreate = true
InitializeLocking = true
ErrorStream = System.err
InitializeCache = true
Cache Size = 1024 * 1024 * 500
InitializeLogging = true
Transactional = false
TrickleCacheWrite = 20
15. Please provide your Container Configuration Flags?
My container configuration is done using the Java API.
The container creation code is:
XmlContainerConfig containerConfig = new XmlContainerConfig();
containerConfig.setStatisticsEnabled(true);
XmlContainer container = xmlManager.createContainer("container", containerConfig);
I am guessing that this means that the only flag I have set is the one
that enables recording of statistics to use in query optimization.
I have no other container configuration information to provide.
16. How many XML Containers do you have?
I have one XML container.
The container has 2,729,465 documents.
The container is a node container rather than a wholedoc container.
Minimum document size is around 1Kb.
Maximum document size is around 50Kb.
Average document size is around 2Kb.
I am using document data as part of the XQueries being run. For
example, I condition query results upon the values of attributes
and elements in the stored documents.
The database has the following indexes:
xmlIndexSpecification = dataContainer.getIndexSpecification();
xmlIndexSpecification.replaceDefaultIndex("node-element-presence");
xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"fragment","node-element-presence");
xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"data","node-element-presence");
xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"xptr","node-element-presence");
xmlIndexSpecification.addIndex("","stub","node-attribute-presence");
xmlIndexSpecification.addIndex("","index", "unique-node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XBRL21LinkNamespace,"label","node-element-substring-string");
xmlIndexSpecification.addIndex(Constants.GenericLabelNamespace,"label","node-element-substring-string");
xmlIndexSpecification.addIndex("","name","node-attribute-substring-string");
xmlIndexSpecification.addIndex("","parentIndex", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","uri", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","type", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","targetDocumentURI", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","targetPointerValue", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","absoluteHref", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","id","node-attribute-equality-string");
xmlIndexSpecification.addIndex("","value", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","arcroleURI", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","roleURI", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","name", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","targetNamespace", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","contextRef", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","unitRef", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","scheme", "node-attribute-equality-string");
xmlIndexSpecification.addIndex("","value", "node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XBRL21Namespace,"identifier", "node-element-equality-string");
xmlIndexSpecification.addIndex(Constants.XMLNamespace,"lang","node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"label","node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"from","node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"to","node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"type","node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"arcrole","node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"role","node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"label","node-attribute-equality-string");
xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"language","node-element-presence");
xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"code","node-element-equality-string");
xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"value","node-element-equality-string");
xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"encoding","node-element-equality-string");
17. Please describe the shape of one of your typical documents? Please
do this by sending us a skeleton XML document.
The following provides the basic information about the shape of all documents
in the data store.
<ns:fragment xmlns:ns="..." attrs...(about 20 of them)>
<ns:data>
Single element that varies from document to document but that
is rarely more than a few small elements in size and (in some cases)
a lengthy section of string content for the single element.
</ns:data>
</ns:fragment>
18. What is the rate of document insertion/update required or
expected? Are you doing partial node updates (via XmlModify) or
replacing the document?
Document insertion rates are not a first order performance criteria.
I do no document modifications using XmlModify.
When doing updates I replace the original document.
19. What is the query rate required/expected?
Not sure how to provide metrics for this, but a single web page is
being generated; this can involve hundreds of queries, each of which
should be trivial to execute given the indexing strategy in use.
20. XQuery -- supply some sample queries
1. Please provide the Query Plan
2. Are you using DBXML_INDEX_NODES?
I am using DBXML_INDEX_NODES by default because I
am using a node container rather than a whole document
container.
3. Display the indices you have defined for the specific query.
4. If this is a large query, please consider sending a smaller
query (and query plan) that demonstrates the problem.
Example queries.
1. collection('browser')/*[@parentIndex='none']
<XQuery>
<QueryPlanToAST>
<LevelFilterQP>
<StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
<ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="none"/>
</StepQP>
</LevelFilterQP>
</QueryPlanToAST>
</XQuery>
2. collection('browser')/*[@stub]
<XQuery>
<QueryPlanToAST>
<LevelFilterQP>
<StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
<PresenceQP container="browser" index="node-attribute-presence-none" operation="eq" child="stub"/>
</StepQP>
</LevelFilterQP>
</QueryPlanToAST>
</XQuery>
3. qplan "collection('browser')/*[@type='org.xbrlapi.impl.ConceptImpl' or @parentIndex='asdfv_3']"
<XQuery>
<QueryPlanToAST>
<LevelFilterQP>
<StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
<UnionQP>
<ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="type" value="org.xbrlapi.impl.ConceptImpl"/>
<ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="asdfv_3"/>
</UnionQP>
</StepQP>
</LevelFilterQP>
</QueryPlanToAST>
</XQuery>
4.
setnamespace xlink http://www.w3.org/1999/xlink
qplan "collection('browser')/*[@uri='http://www.xbrlapi.org/my/uri' and */*[@xlink:type='resource' and @xlink:label='description']]"
<XQuery>
<QueryPlanToAST>
<LevelFilterQP>
<NodePredicateFilterQP uri="" name="#tmp8">
<StepQP axis="parent-of-child" uri="*" name="*" nodeType="element">
<StepQP axis="parent-of-child" uri="*" name="*" nodeType="element">
<NodePredicateFilterQP uri="" name="#tmp1">
<StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
<ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="label:http://www.w3.org/1999/xlink"
value="description"/>
</StepQP>
<AttributeJoinQP>
<VariableQP name="#tmp1"/>
<ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="type:http://www.w3.org/1999/xlink"
value="resource"/>
</AttributeJoinQP>
</NodePredicateFilterQP>
</StepQP>
</StepQP>
<AttributeJoinQP>
<VariableQP name="#tmp8"/>
<ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="uri" value="http://www.xbrlapi.org/my/uri"/>
</AttributeJoinQP>
</NodePredicateFilterQP>
</LevelFilterQP>
</QueryPlanToAST>
</XQuery>21. Are you running with Transactions? If so please provide any
transactions flags you specify with any API calls.
I am not running with transactions.
22. If your application is transactional, are your log files stored on
the same disk as your containers/databases?
The log files are stored on the same disk as the container.
23. Do you use AUTO_COMMIT?
Yes. I think that it is a default feature of the DocumentConfig that
I am using.
24. Please list any non-transactional operations performed?
I do document insertions and I do XQueries.
25. How many threads of control are running? How many threads in read
only mode? How many threads are updating?
One thread is updating. Right now one thread is running queries. I am
not yet testing the web application with concurrent users given the
performance issues faced with a single user.
26. Please include a paragraph describing the performance measurements
you have made. Please specifically list any Berkeley DB operations
where the performance is currently insufficient.
I have loaded approximately 7 GB data into the container and then tried
to run the web application using that data. This involves running a broad
range of very simple queries, all of which are expected to be supported
by indexes to ensure that they do not require XML document traversal activity.
Querying performance is insufficient, with even the most basic queries
taking several minutes to complete.
27. What performance level do you hope to achieve?
I hope to be able to run a web application that simultaneously handles
page requests from hundreds of users, each of which involves a large
number of database queries.
28. Please send us the output of the following db_stat utility commands
after your application has been running under "normal" load for some
period of time:
% db_stat -h database environment -c
1038 Last allocated locker ID
0x7fffffff Current maximum unused locker ID
9 Number of lock modes
1000 Maximum number of locks possible
1000 Maximum number of lockers possible
1000 Maximum number of lock objects possible
155 Number of current locks
157 Maximum number of locks at any one time
200 Number of current lockers
200 Maximum number of lockers at any one time
13 Number of current lock objects
17 Maximum number of lock objects at any one time
1566M Total number of locks requested (1566626558)
1566M Total number of locks released (1566626403)
0 Total number of locks upgraded
852 Total number of locks downgraded
3 Lock requests not available due to conflicts, for which we waited
0 Lock requests not available due to conflicts, for which we did not wait
0 Number of deadlocks
0 Lock timeout value
0 Number of locks that have timed out
0 Transaction timeout value
0 Number of transactions that have timed out
712KB The size of the lock region
21807 The number of region locks that required waiting (0%)
% db_stat -h database environment -l
0x40988 Log magic number
13 Log version number
31KB 256B Log record cache size
0 Log file mode
10Mb Current log file size
0 Records entered into the log
28B Log bytes written
28B Log bytes written since last checkpoint
1 Total log file I/O writes
0 Total log file I/O writes due to overflow
1 Total log file flushes
0 Total log file I/O reads
1 Current log file number
28 Current log file offset
1 On-disk log file number
28 On-disk log file offset
1 Maximum commits in a log flush
0 Minimum commits in a log flush
96KB Log region size
0 The number of region locks that required waiting (0%)
% db_stat -h database environment -m
500MB Total cache size
1 Number of caches
1 Maximum number of caches
500MB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
1749M Requested pages found in the cache (99%)
722001 Requested pages not found in the cache
911092 Pages created in the cache
722000 Pages read into the cache
4175142 Pages written from the cache to the backing file
1550811 Clean pages forced from the cache
19568 Dirty pages forced from the cache
3 Dirty pages written by trickle-sync thread
62571 Current total page count
62571 Current clean page count
0 Current dirty page count
65537 Number of hash buckets used for page location
1751M Total number of times hash chains searched for a page (1751388600)
8 The longest hash chain searched for a page
3126M Total number of hash chain entries checked for page (3126038333)
4535 The number of hash bucket locks that required waiting (0%)
278 The maximum number of times any hash bucket lock was waited for (0%)
1 The number of region locks that required waiting (0%)
0 The number of buffers frozen
0 The number of buffers thawed
0 The number of frozen buffers freed
1633189 The number of page allocations
4301013 The number of hash buckets examined during allocations
259 The maximum number of hash buckets examined for an allocation
1570522 The number of pages examined during allocations
1 The max number of pages examined for an allocation
184 Threads waited on page I/O
Pool File: browser
8192 Page size
0 Requested pages mapped into the process' address space
1749M Requested pages found in the cache (99%)
722001 Requested pages not found in the cache
911092 Pages created in the cache
722000 Pages read into the cache
4175142 Pages written from the cache to the backing file
% db_stat -h database environment -r
Not applicable.
% db_stat -h database environment -t
Not applicable.
vmstat
r b swpd free buff cache si so bi bo in cs us sy id wa
1 4 40332 773112 27196 1448196 0 0 173 239 64 1365 19 4 72 5
iostat
Linux 2.6.24-23-generic (dell) 06/02/09
avg-cpu: %user %nice %system %iowait %steal %idle
18.37 0.01 3.75 5.67 0.00 72.20
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 72.77 794.79 1048.35 5376284 7091504
29. Are there any other significant applications running on this
system? Are you using Berkeley DB outside of Berkeley DB XML?
Please describe the application?
No other significant applications are running on the system.
I am not using Berkeley DB outside of Berkeley DB XML.
The application is a web application that organises the data in
the stored documents into hypercubes that users can slice/dice and analyse.
Edited by: Geoff Shuetrim on Feb 7, 2009 2:23 PM to correct the appearance of the query plans.
Hi Geoff,
Thanks for filling out the performance questionnaire. Unfortunately the forum software seems to have destroyed some of your queries - you might want to use [code] and [/code] tags to mark up your queries and query plans next time.
Geoff Shuetrim wrote:
Current performance involves responses to simple queries that involve 1-2
minute turn around (this improves after a few similar queries have been run,
presumably because of caching, but not to a point that is acceptable for
web applications).
Desired performance is for queries to execute in milliseconds rather than
minutes.
I think that this is a reasonable expectation in most cases.
14. Please provide your exact Environment Configuration Flags (include
anything specified in you DB_CONFIG file)
I do not have a DB_CONFIG file in the database home directory.
My environment configuration is as follows:
Threaded = true
AllowCreate = true
InitializeLocking = true
ErrorStream = System.err
InitializeCache = true
Cache Size = 1024 * 1024 * 500
InitializeLogging = true
Transactional = false
TrickleCacheWrite = 20
If you are performing concurrent reads and writes, you need to enable transactions in both the environment and the container.
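To make that concrete, here is a minimal sketch of the change against the configuration listed above, using the same dbxml Java API classes that appear elsewhere in this thread. The cache size is carried over from the poster's settings; treat the exact flag set as an assumption to adapt, not a definitive recipe:

```java
// Hedged sketch: enable transactions in BOTH the environment and the container.
EnvironmentConfig envConf = new EnvironmentConfig();
envConf.setAllowCreate(true);
envConf.setInitializeCache(true);
envConf.setInitializeLocking(true);
envConf.setInitializeLogging(true);
envConf.setThreaded(true);
envConf.setTransactional(true);          // was: false
envConf.setCacheSize(1024 * 1024 * 500);

XmlContainerConfig containerConf = new XmlContainerConfig();
containerConf.setTransactional(true);    // the container must match the environment
```

With both flags set, concurrent readers and writers are coordinated by the locking and logging subsystems instead of racing each other.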
Example queries.
1. collection('browser')/*[@parentIndex='none']
<XQuery>
<QueryPlanToAST>
<LevelFilterQP>
<StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
<ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="none"/>
</StepQP>
</LevelFilterQP>
</QueryPlanToAST>
</XQuery>
I have three initial observations about this query:
1) It looks like it could return a lot of results - a query that returns a lot of results will always be slow. If you only want a subset of the results, use lazy evaluation, or put an explicit call to the subsequence() function in the query.
2) An explicit element name with an index on it often performs faster than a "*" step. I think you'll get faster query execution if you specify the document element name rather than "*", and then add a "node-element-presence" index on it.
3) Generally the descendant axis is faster than the child axis. If you just need the document rather than the document (root) element, you might find that this query is a little faster (any document with a "parentIndex" attribute whose value is "none"):
collection()[descendant::*/@parentIndex='none']
Similar observations apply to the other queries you posted.
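To illustrate point 2, a presence index on a named document element can be declared through the same XmlIndexSpecification API shown later in this thread. The element name "report" below is a made-up placeholder - substitute your actual document element - and the flag combination is a sketch of the standard presence-index strategy, not the poster's confirmed setup:

```java
// Hedged sketch: back a named document element with a presence index,
// so the query can use the index instead of a "*" step.
XmlIndexSpecification spec = container.getIndexSpecification();
// Hypothetical element name "report" - use your real document element.
spec.addIndex("", "report",
    XmlIndexSpecification.PATH_NODE
    | XmlIndexSpecification.NODE_ELEMENT
    | XmlIndexSpecification.KEY_PRESENCE,
    XmlValue.NONE);
XmlUpdateContext uc = xmlManager.createUpdateContext();
container.setIndexSpecification(spec, uc);
```

The query then becomes collection('browser')/report[@parentIndex='none'], which can be answered from the two indexes without touching document content.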
Get back to me if you're still having problems with specific queries.
John -
Regarding Internal table and access performance
hey guys.
In my report, I somehow reduced the query time by selecting minimum key fields and moving the selected records to an internal table.
Now from this internal table I am restricting the loop
as per my requirements using WHERE statements (believing that internal table retrieval is faster than database access using a query).
But still my performance goes down.
Could you pls suggest how to reduce the execution time
in ABAP programming.
I used below commands.
READ using BINARY SEARCH.
LOOP ... WHERE statement.
PERFORM statements.
COLLECT statements.
DELETE itab (delete duplicate statements too).
SORT itab (sorting).
For each of the above statements, do we have any faster way to retrieve records?
If I check my bottleneck in SE30, it shows
ABAP processing at 70 percent,
database access at 20 percent,
R/3 system at 10 percent.
Now, how do I reduce this ABAP processing time?
Could you pls reply.
ambichan.
Hello Ambichan,
It is difficult to suggest the improvements without looking at the actual code that you are running. However, I can give you some general information.
1. READ using the BINARY SEARCH addition.
This is indeed a good way of doing a READ. But have you made sure that the internal table is <i>sorted by the required fields</i> before you use this statement ?
2. LOOP...WHERE statement.
This is also a good way to avoid looping through unnecessary entries. But further improvement can certainly be achieved if you use FIELD-SYMBOLS, which avoid copying each row into a work area:
LOOP AT ITAB ASSIGNING <FIELD_SYMBOL_OF_THE_SAME_LINE_TYPE_AS_ITAB>.
ENDLOOP.
3. PERFORM statements.
A PERFORM statement cannot be optimized by itself. What matters is the code that you write inside the FORM (or a subroutine).
4. COLLECT statements.
I trust you have used the COLLECT statement to simplify the logic. Let that be as it is. The code is more readable and elegant.
The COLLECT statement is somewhat performance intensive. It takes more time with a normal internal table (STANDARD). See if you can use an internal table of type SORTED. Even better, you can use a HASHED internal table.
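For intuition about why a HASHED table helps: COLLECT adds numeric fields into the existing row with the same key, so the cost per row is dominated by the key lookup - a linear scan for a STANDARD table, a binary search for SORTED, and constant time for HASHED. A rough Java analogue of the hashed case (field names and values are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class CollectDemo {
    // Analogue of COLLECT into a HASHED internal table:
    // sum amounts per key with constant-time key lookup.
    static Map<String, Integer> collect(String[] keys, int[] amounts) {
        Map<String, Integer> totals = new HashMap<>();
        for (int i = 0; i < keys.length; i++) {
            // merge() either inserts the value or adds it to the existing row,
            // which is exactly what COLLECT does for numeric fields.
            totals.merge(keys[i], amounts[i], Integer::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        String[] keys = {"A", "B", "A", "C", "B"};
        int[] amounts = {10, 5, 7, 3, 5};
        Map<String, Integer> totals = collect(keys, amounts);
        System.out.println(totals.get("A")); // 17
        System.out.println(totals.get("B")); // 10
        System.out.println(totals.get("C")); // 3
    }
}
```

The same aggregation against a STANDARD table rescans the table for every row, which is why COLLECT on large unsorted tables shows up in SE30 traces.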
5. DELETE itab (delete duplicate statements too).
If you are making sure that you are deleting several entries based on a condition, then this should be okay. You cannot avoid using the DELETE statement if your functionality requires you to do so.
Also, before deleting the DUPLICATES, ensure that the internal table is sorted.
6. SORT statement.
It depends on how many entries there are in the internal table. If you are using most of the above points on the same internal table, then it is better to define your internal table to be of type SORTED. That way, inserting entries will take a little more time (to ensure that the table is always sorted), but all the other operations are going to be much faster.
Get back to me if you need further assistance.
Regards,
Anand Mandalika. -
Need help with Berkeley XML DB Performance
We need help with maximizing performance of our use of Berkeley XML DB. I am filling most of the 29 part question as listed by Oracle's BDB team.
Berkeley DB XML Performance Questionnaire
1. Describe the Performance area that you are measuring? What is the
current performance? What are your performance goals you hope to
achieve?
We are measuring the performance while loading a document during
web application startup. It is currently taking 10-12 seconds when
only one user is on the system. We are trying to do some testing to
get the load time when several users are on the system.
We would like the load time to be 5 seconds or less.
2. What Berkeley DB XML Version? Any optional configuration flags
specified? Are you running with any special patches? Please specify?
dbxml 2.4.13. No special patches.
3. What Berkeley DB Version? Any optional configuration flags
specified? Are you running with any special patches? Please Specify.
bdb 4.6.21. No special patches.
4. Processor name, speed and chipset?
Intel Xeon CPU 5150 2.66GHz
5. Operating System and Version?
Red Hat Enterprise Linux Release 4 Update 6
6. Disk Drive Type and speed?
Don't have that information
7. File System Type? (such as EXT2, NTFS, Reiser)
EXT3
8. Physical Memory Available?
4GB
9. Are you using Replication (HA) with Berkeley DB XML? If so, please
describe the network you are using, and the number of replicas.
No
10. Are you using a Remote Filesystem (NFS) ? If so, for which
Berkeley DB XML/DB files?
No
11. What type of mutexes do you have configured? Did you specify
--with-mutex=? Specify what you find in your config.log, search
for db_cv_mutex?
None. Did not specify --with-mutex during bdb compilation.
12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
Which compiler and version?
Java 1.5
13. If you are using an Application Server or Web Server, please
provide the name and version?
Oracle Application Server 10.1.3.4.0
14. Please provide your exact Environment Configuration Flags (include
anything specified in you DB_CONFIG file)
Default.
15. Please provide your Container Configuration Flags?
final EnvironmentConfig envConf = new EnvironmentConfig();
envConf.setAllowCreate(true); // If the environment does not
// exist, create it.
envConf.setInitializeCache(true); // Turn on the shared memory
// region.
envConf.setInitializeLocking(true); // Turn on the locking subsystem.
envConf.setInitializeLogging(true); // Turn on the logging subsystem.
envConf.setTransactional(true); // Turn on the transactional
// subsystem.
envConf.setLockDetectMode(LockDetectMode.MINWRITE);
envConf.setThreaded(true);
envConf.setErrorStream(System.err);
envConf.setCacheSize(1024*1024*64);
envConf.setMaxLockers(2000);
envConf.setMaxLocks(2000);
envConf.setMaxLockObjects(2000);
envConf.setTxnMaxActive(200);
envConf.setTxnWriteNoSync(true);
envConf.setMaxMutexes(40000);
16. How many XML Containers do you have? For each one please specify:
One.
1. The Container Configuration Flags
XmlContainerConfig xmlContainerConfig = new XmlContainerConfig();
xmlContainerConfig.setTransactional(true);
xmlContainerConfig.setIndexNodes(true);
xmlContainerConfig.setReadUncommitted(true);
2. How many documents?
Every time the user logs in, the current xml document is loaded from
an Oracle database table and put into the Berkeley XML DB.
The documents get deleted from XML DB when the Oracle application
server container is stopped.
The number of documents should start with zero initially and it
will grow with every login.
3. What type (node or wholedoc)?
Node
4. Please indicate the minimum, maximum and average size of
documents?
The minimum is about 2MB and the maximum could be 20MB. The average
is mostly about 5MB.
5. Are you using document data? If so please describe how?
We are using document data only to save changes made
to the application data in a web application. The final save goes
to the relational database. Berkeley XML DB is just used to store
temporary data since going to the relational database for each change
will cause severe performance issues.
17. Please describe the shape of one of your typical documents? Please
do this by sending us a skeleton XML document.
Due to the sensitive nature of the data, I can provide XML schema instead.
18. What is the rate of document insertion/update required or
expected? Are you doing partial node updates (via XmlModify) or
replacing the document?
The document is inserted during user login. Any change made to the application
data grid or other data components gets saved in Berkeley DB. We also have
an automatic save every two minutes. The final save from the application
gets saved in a relational database.
19. What is the query rate required/expected?
Users will not be entering data rapidly. There will be lot of think time
before the users enter/modify data in the web application. This is a pilot
project but when we go live with this application, we will expect 25 users
at the same time.
20. XQuery -- supply some sample queries
1. Please provide the Query Plan
2. Are you using DBXML_INDEX_NODES?
Yes.
3. Display the indices you have defined for the specific query.
XmlIndexSpecification spec = container.getIndexSpecification();
// ids
spec.addIndex("", "id", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
spec.addIndex("", "idref", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// index to cover AttributeValue/Description
spec.addIndex("", "Description", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_SUBSTRING, XmlValue.STRING);
// cover AttributeValue/@value
spec.addIndex("", "value", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// item attribute values
spec.addIndex("", "type", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// default index
spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// save the spec to the container
XmlUpdateContext uc = xmlManager.createUpdateContext();
container.setIndexSpecification(spec, uc);
4. If this is a large query, please consider sending a smaller
query (and query plan) that demonstrates the problem.
21. Are you running with Transactions? If so please provide any
transactions flags you specify with any API calls.
Yes. READ_UNCOMMITED in some and READ_COMMITTED in other transactions.
22. If your application is transactional, are your log files stored on
the same disk as your containers/databases?
Yes.
23. Do you use AUTO_COMMIT?
No.
24. Please list any non-transactional operations performed?
No.
25. How many threads of control are running? How many threads in read
only mode? How many threads are updating?
We use Berkeley XML DB within the context of a struts web application.
Each user logged into the web application will be running a bdb transaction
within the context of a struts action thread.
26. Please include a paragraph describing the performance measurements
you have made. Please specifically list any Berkeley DB operations
where the performance is currently insufficient.
We are clocking 10-12 seconds of loading a document from dbd when
five users are on the system.
getContainer().getDocument(documentName);
27. What performance level do you hope to achieve?
We would like to get less than 5 seconds when 25 users are on the system.
28. Please send us the output of the following db_stat utility commands
after your application has been running under "normal" load for some
period of time:
% db_stat -h database environment -c
% db_stat -h database environment -l
% db_stat -h database environment -m
% db_stat -h database environment -r
% db_stat -h database environment -t
(These commands require the db_stat utility access a shared database
environment. If your application has a private environment, please
remove the DB_PRIVATE flag used when the environment is created, so
you can obtain these measurements. If removing the DB_PRIVATE flag
is not possible, let us know and we can discuss alternatives with
you.)
If your application has periods of "good" and "bad" performance,
please run the above list of commands several times, during both
good and bad periods, and additionally specify the -Z flags (so
the output of each command isn't cumulative).
When possible, please run basic system performance reporting tools
during the time you are measuring the application's performance.
For example, on UNIX systems, the vmstat and iostat utilities are
good choices.
Will give this information soon.
29. Are there any other significant applications running on this
system? Are you using Berkeley DB outside of Berkeley DB XML?
Please describe the application?
No to the first two questions.
The web application is an online review of test questions. The users
login and then review the items one by one. The relational database
holds the data in xml. During application load, the application
retrieves the xml and then saves it to bdb. While the user
is making changes to the data in the application, it writes those
changes to bdb. Finally when the user hits the SAVE button, the data
gets saved to the relational database. We also have an automatic save
every two minutes, which saves the bdb xml data to the relational
database.
Thanks,
Madhav
[email protected]
Could it be that you simply have not set up indexes to support your query? If so, you could do some basic testing using the dbxml shell:
milu@colinux:~/xpg > dbxml -h ~/dbenv
Joined existing environment
dbxml> setverbose 7 2
dbxml> open tv.dbxml
dbxml> listIndexes
dbxml> query { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
dbxml> queryplan { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
Verbosity will make the engine display some (rather cryptic) information on index usage. I can't remember where the output is explained; my feeling is that "V(...)" means the index is being used (which is good), but that observation may not be accurate. Note that some details in the setVerbose command could differ, as I'm using 2.4.16 while you're using 2.4.13.
Also, take a look at the query plan. You can post it here and some people will be able to diagnose it.
Michael Ludwig -
Poor performance with Oracle Spatial when spatial query invoked remotely
Is anyone aware of any problems with Oracle Spatial (10.2.0.4 with patches 6989483 and 7003151 on Red Hat Linux 4) that might explain why a spatial query (SDO_WITHIN_DISTANCE) performs 20 times worse when invoked remotely from another computer (using SQL*Plus) than when the very same query is invoked from the database server itself (also using SQL*Plus)?
Does Oracle Spatial have any known problems with servers which use SAN disk storage? That is the primary difference between a server in which I see this poor performance and another server where the performance is fine.
Thank you in advance for any thoughts you might share.
OK, that's clearer.
Are you sure it is the SQL inside the procedure that is causing the problem? To check, try extracting the SQL from inside the procedure and run it in SQLPLUS with
set autotrace on
set timing on
SELECT ....
If the plans and performance are the same then it may be something inside the procedure itself.
Have you profiled the procedure? Here is an example of how to do it:
Prompt Firstly, create PL/SQL profiler table
@$ORACLE_HOME/rdbms/admin/proftab.sql
Prompt Secondly, use the profiler to gather stats on execution characteristics
DECLARE
l_run_num PLS_INTEGER := 1;
l_max_num PLS_INTEGER := 1;
v_geom mdsys.sdo_geometry := mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(0,0,45,45,90,0,135,45,180,0,180,-45,45,-45,0,0));
BEGIN
dbms_output.put_line('Start Profiler Result = ' || DBMS_PROFILER.START_PROFILER(run_comment => 'PARALLEL PROFILE')); -- The comment name can be anything: here it is related to the Parallel procedure I am testing.
v_geom := Parallel(v_geom,10,0.05,1); -- Put your procedure call here
dbms_output.put_line('Stop Profiler Result = ' || DBMS_PROFILER.STOP_PROFILER );
END;
SHOW ERRORS
Prompt Finally, report activity
COLUMN runid FORMAT 99999
COLUMN run_comment FORMAT A40
SELECT runid || ',' || run_date || ',' || run_comment || ',' || run_total_time
FROM plsql_profiler_runs
ORDER BY runid;
COLUMN runid FORMAT 99999
COLUMN unit_number FORMAT 99999
COLUMN unit_type FORMAT A20
COLUMN unit_owner FORMAT A20
COLUMN text FORMAT A100
compute sum label 'Total_Time' of total_time on runid
break on runid skip 1
set linesize 200
SELECT u.runid || ',' ||
u.unit_name,
d.line#,
d.total_occur,
d.total_time,
text
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON u.runid = d.runid
AND
u.unit_number = d.unit_number
JOIN all_source als ON ( als.owner = 'CODESYS'
AND als.type = u.unit_type
AND als.name = u.unit_name
AND als.line = d.line# )
WHERE u.runid = (SELECT max(runid) FROM plsql_profiler_runs)
ORDER BY d.total_time desc;
Run the profiler in both environments and see if you can see where the slowdown exists.
regards
Simon