Variables in VO Query for SQL Server
I created my VO working with SQL Server:
Select fa_movimientos.invnum Secuencia,
fa_productos.codpro CodigoProducto,
fa_productos.despro DescripcionProducto,
fa_movimientos_detalle.qtypro CantidadCaja,
fa_movimientos.codprv codproveedor,
fa_proveedores.desprv DescripcionProveedor,
fa_movimientos.numdoc Documento,
fa_movimientos.tipkar tipkar,
fa_movimientos.fecmov FechaCompra,
fa_movimientos.totmov Totalmovimiento,
fa_movimientos_detalle.totpro TotalProducto,
fa_movimientos_detalle.qtypro CantidadEntero,
fa_movimientos_detalle.qtppro CantidadFraccion,
fa_movimientos_detalle.pvfsal pvfsal,
fa_movimientos_detalle.coscom CostoCompaxUnidad
from fa_movimientos, fa_movimientos_detalle, fa_productos, fa_proveedores
where fa_movimientos.tipkar ='CC'
and fa_movimientos.codprv = fa_proveedores.codprv
and fa_proveedores.codprv=:PV_CODPROV
ADF can't recognize this expression --> NAME_OF_COLUMN = :BIND_VARIABLE (fa_movimientos.codprv = fa_proveedores.codprv)
The way to write the query is like this:
WHERE fa_proveedores.codprv = ?
I defined my variable PV_CODPROV and set its bind position to 0. I'm going to try whether this works.
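As a quick illustration of positional binding (using Python's sqlite3 as a stand-in for the ADF/SQL Server stack; the table and sample data here are invented):

```python
import sqlite3

# Stand-in schema: a tiny suppliers table mimicking fa_proveedores.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fa_proveedores (codprv TEXT, desprv TEXT)")
conn.executemany("INSERT INTO fa_proveedores VALUES (?, ?)",
                 [("P01", "Proveedor Uno"), ("P02", "Proveedor Dos")])

# The '?' placeholder is positional: the first '?' is bind position 0,
# which is where the PV_CODPROV value is supplied at execution time.
rows = conn.execute(
    "SELECT desprv FROM fa_proveedores WHERE codprv = ?",
    ("P01",),
).fetchall()
print(rows)
```

The same idea carries over: the query text holds only placeholders, and values are supplied separately by position.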
Similar Messages
-
Hi,
I'm trying to execute some SQL queries and I just don't understand what's wrong.
I'm using Tomcat and SQL Server to do this, but when I try to execute a query with an INNER JOIN statement, Tomcat raises a SQL exception... At first I thought there was a problem with the database connection, but I realized that a simple query against a single table works pretty well. Then I found some problems with JDBC:ODBC... so I installed the JDBC driver for SQL Server 2000 and tested the same simple query, and it works... So I came to a conclusion... INNER JOIN or JOIN statements can't be used in JDBC... Please, somebody tell me I'm wrong and give me a hand...
I'm using TOMCAT 4 and JDK 1.4 SQL Server 2000
Error occurs when executeQuery() is called.... not prepareStatement().... ??????
Driver DriverRecResult = (Driver)Class.forName(driver).newInstance();
Connection ConnRecResult = DriverManager.getConnection(DSN,user,password);
PreparedStatement StatementRecResult = ConnRecResult.prepareStatement(query);
ResultSet RecResult = StatementRecResult.executeQuery(); <---- Exception raised here
So much thanks in advance.
That's exactly what I think: the driver is raising the exception, but I don't know why... I tested the same query with INNER JOIN directly from SQL Query Analyzer and it works perfectly, so my problem isn't the SQL, but JSP and JDBC, 'cause I'm a newbie about these issues.
Common sense tells me the problem may lie in the SQL Server drivers, 'cause I run the same pages on JRun through jdbc:odbc and they work well, but by now I just depend on Tomcat...
I've installed the SQL Server drivers for JDBC but I find they don't work fully... Could it be the version of the JDK I've installed? What version do I need?
( I'm running Tomcat 4 with JDK 1.4 & SQL Server 2000 W2K )
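For what it's worth, JOIN support lives in the database engine, not in the driver's prepare/execute API, so an INNER JOIN should run through a prepared statement like any other query. A minimal sketch of the same prepare-then-execute flow, using Python's sqlite3 with made-up tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp  (id INTEGER PRIMARY KEY, dept_id INTEGER, name TEXT);
    INSERT INTO dept VALUES (1, 'Sales');
    INSERT INTO emp  VALUES (10, 1, 'Ana');
""")

# The join is parsed and executed by the engine; the driver only
# ships the SQL text and the bound parameter across the connection.
rows = conn.execute(
    "SELECT e.name, d.name FROM emp e "
    "INNER JOIN dept d ON d.id = e.dept_id WHERE d.name = ?",
    ("Sales",),
).fetchall()
print(rows)
```

If a join fails where a single-table query succeeds, the culprit is usually the specific driver build or the connection target, not the JOIN syntax itself.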
Thanks for the reply. -
Outer join query for SQL server from Oracle
Hi All,
My question is regarding making queries from Oracle to a SQL Server database through a DBLink.
In my oracle database I have a DBLink emp.world for SQL Server database.
I need to query SQL Server data from oracle (so that this query can be combined with other oracle tables).
Query is given below:
SELECT
a."EmpID" as "Employee ID",
a."EmpStatus" "Employee Status",
b."EmpSub" as "Employee Subjects"
FROM
employees@emp.world a
left outer join subjects@emp.world b on a."EmpID" = b."suEmpID"
ORDER BY a."EmpID";
My problem is that when I run the same query from Oracle, it does not show the EmpID that does not exist in the Subjects table, but when run from the actual SQL Server database, it shows all the records.
Samples are given below:
Run from Oracle
Employee ID Employee Status Employee Subjects
101 Active Maths
102 Active Maths
102 Active Physics
104 Inactive Chemistry
Run from SQL Server
Employee ID Employee Status Employee Subjects
101 Active Maths
102 Active Maths
102 Active Physics
103 Active NULL
104 Inactive Chemistry
I am not sure why the outer join for SQL Server tables is not working in Oracle. What is the right way to do an outer join in this case?
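The expected behavior can be reproduced with a small sketch (Python's sqlite3 standing in for SQL Server; the schema below is assumed from the sample output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Employee (EmpID INTEGER, EmpStatus TEXT);
    CREATE TABLE Subject  (suEmpID INTEGER, EmpSub TEXT);
    INSERT INTO Employee VALUES (101,'Active'),(102,'Active'),
                                (103,'Active'),(104,'Inactive');
    INSERT INTO Subject  VALUES (101,'Maths'),(102,'Maths'),
                                (102,'Physics'),(104,'Chemistry');
""")

# LEFT OUTER JOIN keeps every Employee row; employee 103 has no
# subject, so EmpSub comes back NULL (None in Python).
rows = conn.execute("""
    SELECT a.EmpID, a.EmpStatus, b.EmpSub
    FROM Employee a
    LEFT OUTER JOIN Subject b ON a.EmpID = b.suEmpID
    ORDER BY a.EmpID
""").fetchall()
print(rows)
```

A correct LEFT OUTER JOIN yields five rows including the (103, 'Active', NULL) row, so when that row goes missing across the DBLink the join is effectively being executed as an inner (or reversed) join on the Oracle side.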
I am using oracle database 10gR2 and SQL Server 2005.
Please Help.
Thanks.
SELECT
a."EmpID" as "Employee ID",
a."EmpStatus" "Employee Status",
b."EmpSub" as "Employee Subjects"
FROM
employees@emp.world a
left outer join subjects@emp.world b on a."EmpID" = b."suEmpID"
ORDER BY a."EmpID";
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/queries006.htm#sthref3175
From your description, it appears you may need a right outer join. You want to get back all the rows from 'B', not from 'A'. Try a right join and let us know. -
I have used SSRS a lot, years ago, with SQL Server 2000 and SQL Server 2005. I have written external assemblies, ... But now I have to do this with SQL Server 2008 (R2 -- which I realize I am way behind the times already, but ...). In SQL Server 2000 and 2005 there was a tab for the datasource to the left of the design tab, which was to the left of the preview tab. How do I get to the datasource window in SQL Server 2008 (R2)?
I see that datasource explorer. But where can I get to the datasource window to edit my queries and so forth for sql server 2008 (R2)?
Thanks
Rich P
I think I found the answer to my question --- just right-click on the Data Sources or Datasets for editing connections and dataset queries. I'm guessing it gets even fancier with SQL Svr 2012 - 2014. Man, that's the one thing about coding platforms -- you let it go for a few years and come back, and everything has changed (well, a lot of things). Now I need to figure out how to add an external assembly to SSRS 2008 (R2).
Rich P -
After installed SP1 for SQL Server 2012, can no longer export to csv
After installing SP1 today via Windows Update, I am no longer able to export data to csv using the SQL Server Import and Export wizard. I get the following error message:
"Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider."
"Column "col1": Source data type "200" was not found in the data type mapping file."...
(The above line repeats for each column)
The work-around I have is to manually map each column in the "Edit Mappings..." option from the "Configure Flat File Destination" page of the wizard. This is an extreme inconvenience, having to edit the mappings and change each column to "string [DT_STR]" type from "byte stream [DT_BYTES]" type each time I want to export to csv. I did not have to do this before installing SP1; it worked perfectly for months with hundreds of exports prior to this update, with no need to modify mappings.
I am running Windows 7 64-bit, SQL Server 2012 Express edition. Again, just yesterday from Windows Update, I installed SQL Server 2012 Service Pack 1 (KB2674319), followed by Update Rollup for SQL Server 2012 Service Pack 1 (KB2793634). This situation was not occurring before these updates were installed, and I noticed it immediately after they were installed (and of course I restarted my computer after the updates).
In SSMS I just now created a test DB and table to provide a step-by-step with screenshots.
Here is the code I ran to create the test DB and table:
CREATE DATABASE testDB;
GO
USE testDB;
GO
CREATE TABLE testTable (
id int,
lname varchar(50),
fname varchar(50),
address varchar(50),
city varchar(50),
state char(2),
dob date
);
GO
INSERT INTO testTable VALUES
(1,'Smith','Bob','123 Main St.','Los Angeles','CA','20080212'),
(2,'Doe','John','555 Rainbow Ln.','Chicago','IL','19580530'),
(3,'Jones','Jane','999 Somewhere Pl.','Washington','DC','19651201'),
(4,'Jackson','George','111 Hello Cir.','Dallas','TX','20010718');
GO
SELECT * FROM testTable;
Results look good:
id lname fname address city state dob
1 Smith Bob 123 Main St. Los Angeles CA 2008-02-12
2 Doe John 555 Rainbow Ln. Chicago IL 1958-05-30
3 Jones Jane 999 Somewhere Pl. Washington DC 1965-12-01
4 Jackson George 111 Hello Cir. Dallas TX 2001-07-18
In Object Explorer, I right-click on the [testDB] database, choose "Tasks", then "Export Data..." and the SQL Server Import and Export Wizard appears. I click Next to leave all settings as-is on the "Choose a Data Source" page, then on the "Choose a Destination"
page, under the "Destination" drop-down I choose "Flat File Destination" then browse to the desktop and name the file "table_export.csv" then click Next. On the "Specify Table Copy or Query" page I choose "Write a query to specify the data to transfer" then
click Next. I type the following SQL statement:
SELECT * FROM testTable;
When clicking the "Parse" button I get the message "This SQL statement is valid."
On to the next page, "Configure Flat File Destination" I try leaving the defaults then click Next. This is where I am getting the error message (see screenshot below):
Then going to the "Edit Mappings..." option on the "Configure Flat File Destination" page, I see that all columns which were defined as varchar in the table are showing as type "byte stream [DT_BYTES]", size "0"; the state column, which is defined as char(2), shows correctly, however, with type "string [DT_STR]", size "2" (see screenshot below):
So what I have to do is change the type for the lname, fname, address and city columns to "string [DT_STR]", then I am able to proceed with the export successfully. Again, this just started happening after installing these updates. As you can imagine, this
is very frustrating, as I do a lot of exports from many tables, with a lot more columns than this test table.
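An alternative to fighting the wizard for every export is to script it; here is a minimal sketch using Python's csv module, with sqlite3 standing in for SQL Server (a real run would connect through a SQL Server driver instead, and the table here is a trimmed-down stand-in):

```python
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testTable (id INTEGER, lname TEXT, fname TEXT)")
conn.execute("INSERT INTO testTable VALUES (1, 'Smith', 'Bob')")

cur = conn.execute("SELECT * FROM testTable")
with open("table_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(col[0] for col in cur.description)  # header row
    writer.writerows(cur)                               # data rows
```

Because the cursor reports column names and the csv writer stringifies each value, there is no type-mapping step to go wrong.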
Thanks for your help. -
Is this a bug in JDBC driver for SQL Server 2000?
Hi, all:
I encountered a strange problem today.
My struts/jsp application throws an error while executing the following SQL:
SELECT id,moduleName,pageLevel,ent_name,style_name,type,hits,startDate,expireDate,expireAction,active,dealtime FROM v_ad WHERE style_id=1 ORDER BY id DESC
The error message is:
[Microsoft][SQLServer 2000 Driver for JDBC][SQLServer]Invalid column name 'style_id'.
However, the column "style_id" does exist in view "v_ad", and if I execute this query in SQL Server Query Analyzer, it runs rightly with my expected result.
I wonder if this is a bug of JDBC, and if this is really a bug, how should I deal with this problem?
Any help would be appreciated!
Sounds like you may not have connected to the right database on the server.
Check the default database for the username/password combo you are logging in with. -
*****Error in Microsoft JDBC drivers for SQL Server 2000****
hi guys,
I am getting the following error in my application. The error seems to have thrown by Microsoft JDBC drivers for SQL Server 2000
The application tries to execute the following query when the error is thrown:
SELECT getDate(); // getDate is a function which returns the current date and time. The error is thrown occasionally; other times the same query is executed correctly by the application.
Can any one help with this one.
The error is:
java.lang.NullPointerException
at com.microsoft.jdbc.base.BaseImplStaticCursorResultSet.setupTempFiles(Unknown Source)
at com.microsoft.jdbc.base.BaseImplStaticCursorResultSet.<init>(Unknown Source)
at com.microsoft.jdbc.base.BaseStatement.chainInServiceImplResultSets(Unknown Source)
at com.microsoft.jdbc.base.BaseStatement.getNextResultSet(Unknown Source)
at com.microsoft.jdbc.base.BaseStatement.commonGetNextResultSet(Unknown Source)
at com.microsoft.jdbc.base.BaseStatement.executeQueryInternal(Unknown Source)
at com.microsoft.jdbc.base.BaseStatement.executeQuery(Unknown Source)
at com.sanderson.tallyman.util.TallymanDB.executeQuery(Unknown Source)
at com.sanderson.tallyman.util.TallymanDB.getCurrentDate(Unknown Source)
at com.sanderson.tallyman.operations.interfaces.RecordUpdateControl.updateRecord(Unknown Source)
at com.sanderson.tallyman.operations.interfaces.DebtInterfaceControl.processUpdate(Unknown Source)
at com.sanderson.tallyman.operations.interfaces.DebtInterfaceControl.processInterface(Unknown Source)
at com.sanderson.tallyman.operations.interfaces.InterfaceHandler$ProcessRecord.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
rgds,
Hi,
Did you ever get an answer to this? I am also having this problem. -
Permissions needed for sql server job to execute stored procedure on linked server?
Hi all
I have a job step which attempts to call a stored procedure on a linked server.
This step is failing with a permission denied error. How can I debug or resolve this?
The job owner is sysadmin on both servers so should have execute permission to the database/proc I'm calling, right?
The error is:
The EXECUTE permission was denied on the object 'myProc', database 'myDatabase', schema 'dbo'. [SQLSTATE 42000] (Error 229). The step failed.
My code is:
EXEC [LinkedServer].myDatabase.dbo.myProc
Also tried:
SELECT * FROM OPENQUERY([LinkedServer], 'SET FMTONLY OFF EXEC myDatabase.dbo.myProc')
With the same result.
Any help appreciated.
The job owner may be sysadmin on the remote server, but the service account for SQL Server Agent may not be. And it is the latter that counts, since it is the service account that logs in and impersonates the job owner. But the impersonation inside SQL Server does not count for much in Windows, and it is through Windows that the connection is made to the other site.
One way to resolve this is to set up a login mapping for the job owner. The login mapping must be for a SQL login on the remote server.
You can verify the theory by running this query from the job:
SELECT * FROM OPENQUERY([LinkedServer], 'SELECT SYSTEM_USER')
By the way, putting SET FMTONLY OFF in OPENQUERY is a terrible idea. This has the effect that the procedure is executed twice. (Unless both servers are SQL 2012 or higher in which case FMTONLY has no effect at all.)
Erland Sommarskog, SQL Server MVP, [email protected] -
Download for SQL Server machine running Windows Server 2008 Enterprise SP2
I am confused what 11g product to download and how to configure for the above server. I just need for SQL Server 2008 to be able to communicate with some Oracle databases. I don't need to manage the Oracle database, nor do I need to create one. I just need to be able to create linked servers, use SQL Plus, and create some SSIS or DTS packages that will bring Oracle data into the SQL Server. I think I just need the client, but for 11g, I see it is less than 1GB in size. I am afraid I may be missing something because I thought the client downloads were about 2GB, but the 2GB downloads are not called "client". This is a 64bit machine running an Intel Xeon X7460 chip.
Our server has a C:\, D:\, and E:\ drive on it. The data (.mdf) is on drive E:\. The normal environment is on C:\ like usual. I also need to know if I need to set environment variables which I read about, but am not sure how to set them and make them stay, or if the software does that.
As an aside, I tried installing a 10g on there that the DBA downloaded and could never get it to work, although I was successful in getting tnsping to work, but SQLPlusW would close on me when I tried to connect and SQLPlus failed with some error but can't remember what right now. I uninstalled it. I think it may have been the wrong download.
I would appreciate any info at all, even if you don't know about everything said here.
I have a link here where I think I am close, but I still am not sure. Let me know if this is correct and which link to click. If I am wrong, please provide the correct link if you can. Thank you.
[http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-win64soft-094461.html]
Edited by: user486992 on Oct 20, 2010 11:07 AM
Hi,
To integrate Oracle with SQL server you need the following
Install Oracle 11g Release 2 client software
Install Oracle 11g Release 2 ODAC software
Restart SQL services
Configure OraOLEDB.Oracle provider
Create linked server
Add remote logins for linked server
See these links:
http://www.mssqltips.com/tip.asp?tip=1433
http://www.easysoft.com/applications/oracle/database-gateway-dg4odbc.html :) -
OGG for SQL Server - Extract stops capturing - Bug?
Hi, all,
I've found a problem with OGG for SQL Server where the Extract stops capturing data after the transaction log is backed up. I've looked for ways to reconfigure OGG to avoid the problem but couldn't find any reference to options to workaround this problem. It seems to be a bug to me.
My Extract configuration is as follows:
EXTRACT ext1
SOURCEDB mssql1
TRANLOGOPTIONS NOMANAGESECONDARYTRUNCATIONPOINT
EOFDELAY 60
EXTTRAIL dirdat/e1
TABLE dbo.TestTable;
I'm using the EOFDELAY parameter for testing purposes only, since it's easy to reproduce the scenario that causes the issue when the extract polling is configured with longer intervals.
When the Transaction Log backup runs, SQL Server marks all the virtual logs that are older than the primary and secondary truncation points as inactive (status = 0). These virtual logs can then be reused if required. They still contain change records, though, and OGG can read from them if required, before they are overwritten. This situation will never occur if we are not using SQL Replication and have the Extract configured with the parameter MANAGESECONDARYTRUNCATIONPOINT.
However, I'm trying to simulate a scenario where OGG is used alongside SQL Replication and the Extract is configured with the NOMANAGESECONDARYTRUNCATIONPOINT option. The situation that I've reproduced and that caused the Extract to stop capturing is the following sequence of events:
1. The Extract reads the transaction log and captures changes up to LSN X
2. More changes are made to the database and the LSN is incremented
3. The Log Reader reads the Transaction Log, captures changes up to LSN X+Y and advances the secondary truncation point to that LSN
4. A transaction log backup occurs, backing up all the active virtual logs, advancing the primary truncation point to an LSN greater than LSN X+Y, and marking all the virtual logs with LSNs <= X+Y as inactive (status = 0)
5. Changes continue to happen in the database consuming all the available inactive virtual logs and overwriting them.
6. The extract wakes up again to capture more changes.
At this point, the changes between LSNs X and X+Y are not in the Transaction Log anymore, but are available in the backups. From what I understood in the documentation, the Extract should detect that situation and retrieve the changes from the Transaction Log backups. This, however, is not happening and the Extract becomes stuck. It still polls the transaction log at the configured interval, querying the log state with DBCC LOGINFO, but doesn't move forward anymore.
If I stop and restart the Extract I can see from the trace that it does the right thing upon startup. It realises that it requires information that's missing from the logs, queries MSDB for the available backups, and mines the backups to get the required LSNs.
I would've thought the Extract should do the same during normal operation, without the need for a restart.
Is this a bug or the normal operation of the Extract? Is there a way to configure it to avoid this situation without using NOMANAGESECONDARYTRUNCATIONPOINT?
The following is the state of the Extract once it gets stuck. The last replicated change occurred at 2012-07-09 12:46:50.370000. All the changes after that, and there are many, were not captured until I restarted the Extract.
GGSCI> info extract ext1, showch
EXTRACT EXT1 Last Started 2012-07-09 12:32 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:54 ago)
VAM Read Checkpoint 2012-07-09 12:46:50.370000
LSN: 0x0000073d:00000aff:0001, Tran: 0000:000bd922
Current Checkpoint Detail:
Read Checkpoint #1
VAM External Interface
Startup Checkpoint (starting position in the data source):
Timestamp: 2012-07-09 11:41:06.036666
LSN: 0x00000460:00000198:0004, Tran: 0000:00089b02
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Timestamp: 2012-07-09 12:46:50.370000
LSN: 0x0000073d:00000afd:0004, Tran: 0000:000bd921
Current Checkpoint (position of last record read in the data source):
Timestamp: 2012-07-09 12:46:50.370000
LSN: 0x0000073d:00000aff:0001, Tran: 0000:000bd922
Write Checkpoint #1
GGS Log Trail
Current Checkpoint (current write position):
Sequence #: 14
RBA: 28531192
Timestamp: 2012-07-09 12:50:02.409000
Extract Trail: dirdat/e1
CSN state information:
CRC: D2-B6-9F-B0
CSN: Not available
Header:
Version = 2
Record Source = A
Type = 8
# Input Checkpoints = 1
# Output Checkpoints = 1
File Information:
Block Size = 2048
Max Blocks = 100
Record Length = 20480
Current Offset = 0
Configuration:
Data Source = 5
Transaction Integrity = 1
Task Type = 0
Status:
Start Time = 2012-07-09 12:32:29
Last Update Time = 2012-07-09 12:50:02
Stop Status = A
Last Result = 400
Thanks!
Andre
It might be something simple (or maybe not); but the best/fastest way to troubleshoot this would be to have Oracle (GoldenGate) support review your configuration. There are a number of critical steps required to allow GG to interoperate with MS's capture API. (I doubt this is it, but is your TRANLOGOPTIONS on one line? It looks like you have it on two, the way it's formatted here.)
Anyway, GG support has seen it all, and can probably wrap this up quickly. (And if it was something simple -- or even a bug -- do post back here & maybe someone else can benefit from the solution.)
Perhaps someone else will be able to provide a better answer, but for the most part troubleshooting this (i.e., SQL Server) via a forum tends to be a bit like doing brain surgery blindfolded. -
Hi All,
When I query the SQL server table from Oracle using DB Link, it works fine for any table:
select * from testtable@DBLINK test -- This statement works fine because I am giving * i.e. all columns
But when I try to query specific columns like
Select test.status from testtable@DBLINK test
then it gives me the error ORA-00904: "test.Status" invalid identifier, although I can see this particular column when I query the testtable for all of the columns.
I don't know what to do.
Thanks.
You need to use quoted names. The SQL Server data dictionary stores names in the same case they were entered, while Oracle stores them in upper case. So when you issue
Select test.status from testtable@DBLINK test
the Oracle parser will look for a column named STATUS, while on the SQL Server side it could be stored, for example, as Status or status. Check the column names on the SQL Server side and use quoted names. Assuming the column name is Status:
Select test."Status" from testtable@DBLINK test
SY. -
Error deploying JDBC driver for SQL Server 2005
Hi all
I'm trying to deploy a JDBC driver for MS SQL Server 2005 (downloaded [here|http://www.microsoft.com/downloads/details.aspx?familyid=C47053EB-3B64-4794-950D-81E1EC91C1BA&displaylang=en]). When I try to deploy it according to the instructions found [here|http://help.sap.com/saphelp_nwce10/helpdata/en/51/735d4217139041e10000000a1550b0/frameset.htm] it fails.
The error in the logs doesn't give any useful information though. It only says Error occurred while deploying component ".\temp\dbpool\MSSQL2005.sda".
Has anyone else deployed a driver for SQL Server 2005, or perhaps have any suggestions of what else I could try?
Thanks
Stuart
Hi Vladimir
That's excellent news! Thanks for the effort you've put into this. I'm very impressed with how seriously these issues are dealt with, specifically within the Java EE aspects of SAP.
May I make two suggestions on this topic:
1. Given that NWA has such granular security permissions, please could the security error be shown when it is raised? This would help immediately identify that the problem isn't actually a product error, but rather a missing security permission (and thus save us time and reduce your support calls).
2. Please could the role permissions be clearly documented (perhaps they already are, and I just couldn't find the docs?) so we know what is and isn't included in the role. The name is very misleading, as a "superadmin" is generally understood to have no limitation on their rights - so clear documentation on what is in-/excluded would be most helpful.
On a related topic, I came across another issue like this that may warrant your attention (while you're already looking into NWA security issues). I logged a support query about it (ref: 0120025231 0000753421 2008) in case you can retrieve details there (screenshots, logs, etc.). It's basically a similar security constraint when trying to create a Destination. I'm not sure if this is something you would like to include as standard permissions within the NWA_SUPERADMIN role or not, but I think it's worth consideration.
Thanks again for your help!
Cheers
Stuart -
Converting MDX query into SQL Server 2008
We have MDX query based OLAP cubes.
To reduce MDX based dependencies, we have to convert the MDX based cube queries to SQL Server 2008 based queries.
For this I need expert advice to convert the query below to a SQL Server 2008 based query:
CREATE MEMBER CURRENTCUBE.Measures.[Ack Lost]
AS 'Sum(PeriodsToDate([Time].[Year], [Time].CurrentMember ), [Measures].[Lost])',
FORMAT_STRING = "#,#",
VISIBLE = 1;
Hi Sachin,
According to your description, you need to convert the MDX query to a T-SQL query, right?
Your MDX query is a calculated measure that returns the total value from the first sibling up to the given member. In a T-SQL query we can use the query Yogisha provided to achieve the same requirement. Now you need a tool to convert all the MDX related SSAS cube queries to MS SQL Server 2008 based queries.
Although MDX has some similarities with T-SQL, these languages are in many ways different. Beginning to learn and comprehend SQL Server Analysis Services (SSAS) MDX queries can be difficult after one has spent years writing queries in T-SQL. Currently, there is no such tool to convert an MDX query to a T-SQL query, given the structural differences between MDX and T-SQL. So you need to convert them manually; please refer to the links below for details.
http://www.mssqltips.com/sqlservertip/2916/comparison-of-queries-written-in-tsql-and-sql-server-mdx/
https://sqlmate.wordpress.com/2013/11/12/t-sql-vs-mdx-2/
http://technet.microsoft.com/en-us/library/aa216779(v=sql.80).aspx
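To make the translation target concrete: the MDX measure above is a year-to-date running total, which in T-SQL is typically expressed as SUM(...) OVER (PARTITION BY year ORDER BY period). The same logic sketched in Python, with invented sample data:

```python
def ytd_totals(rows):
    """Running year-to-date total of `lost`, resetting each year.

    `rows` is an iterable of (year, period, lost) tuples -- the same
    aggregation as Sum(PeriodsToDate([Time].[Year], ...), [Measures].[Lost]).
    """
    totals = {}  # running total per year
    out = []
    for year, period, lost in sorted(rows):
        totals[year] = totals.get(year, 0) + lost
        out.append((year, period, totals[year]))
    return out

sample = [(2023, 1, 10), (2023, 2, 5), (2023, 3, 7),
          (2024, 1, 4), (2024, 2, 6)]
print(ytd_totals(sample))
```

Each year's total accumulates across periods and resets at the year boundary, which is exactly what PeriodsToDate scoped to [Time].[Year] does.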
Regards,
Charlie Liao
TechNet Community Support -
Get definition of an index from query in SQL Server
Hi,
I need to get the definition of an index using a query in SQL Server 2008, as we can get the definition of a stored procedure using the sp_helptext command.
Thanks In advance,
Jitesh
I have worked on the script and updated it as per my need. Now I am able to generate the script for a specific index. Here is the script:
CREATE PROCEDURE ScriptCreateDropIndexes_SP
@TableName SYSNAME
,@SchemaName SYSNAME = 'dbo'
,@SORT_IN_TEMPDB VARCHAR(3) = 'OFF'
,@DROP_EXISTING VARCHAR(3) = 'OFF'
,@STATISTICS_NORECOMPUTE VARCHAR(3) = 'OFF'
,@ONLINE VARCHAR(3) = 'OFF'
,@Index_Name NVARCHAR(1000)
,@is_Create int = 0
AS
/*
Parameters
@Schemaname - SchemaName to which the table belongs. Default value 'dbo'.
@Tablename - TableName for which the Indexes need to be scripted.
@SORT_IN_TEMPDB - Runtime value for SORT_IN_TEMPDB option in create index.
Valid Values ON \ OFF. Default = 'OFF'
@DROP_EXISTING - Runtime value for DROP_EXISTING option in create index.
Valid Values ON \ OFF. Default = 'OFF'
@STATISTICS_NORECOMPUTE - Runtime value for STATISTICS_NORECOMPUTE option in create index.
Valid Values ON \ OFF. Default = 'OFF'
@ONLINE - Runtime value for ONLINE option in create index.
Valid Values ON \ OFF. Default = 'OFF'
*/
SET NOCOUNT ON
IF @SORT_IN_TEMPDB NOT IN ('ON','OFF')
BEGIN
RAISERROR('Valid value for @SORT_IN_TEMPDB is ON \ OFF',16,1)
RETURN
END
IF @DROP_EXISTING NOT IN ('ON','OFF')
BEGIN
RAISERROR('Valid value for @DROP_EXISTING is ON \ OFF',16,1)
RETURN
END
IF @STATISTICS_NORECOMPUTE NOT IN ('ON','OFF')
BEGIN
RAISERROR('Valid value for @STATISTICS_NORECOMPUTE is ON \ OFF',16,1)
RETURN
END
IF @ONLINE NOT IN ('ON','OFF')
BEGIN
RAISERROR('Valid value for @ONLINE is ON \ OFF',16,1)
RETURN
END
DECLARE @IDXTable TABLE
(
Schema_ID INT
,Object_ID INT
,Index_ID INT
,SchemaName SYSNAME
,TableName SYSNAME
,IndexName SYSNAME
,IsPrimaryKey BIT
,IndexType INT
,CreateScript VARCHAR(MAX) NULL
,DropScript VARCHAR(MAX) NULL
,ExistsScript VARCHAR(MAX) NULL
,Processed BIT NULL
)
INSERT INTO @IDXTable
(
Schema_ID
,Object_ID
,Index_ID
,SchemaName
,TableName
,IndexName
,IsPrimaryKey
,IndexType
)
SELECT ST.Schema_id
,ST.Object_id
,SI.Index_id
,SCH.Name
,ST.Name
,SI.Name
,SI.is_primary_key
,SI.Type
FROM SYS.INDEXES SI
JOIN SYS.TABLES ST
ON SI.Object_ID = ST.Object_ID
JOIN SYS.SCHEMAS SCH
ON SCH.schema_id = ST.schema_id
WHERE SCH.Name = @SchemaName
AND ST.Name = @TableName
AND SI.name = @Index_Name
AND SI.Type IN (1,2,3)
DECLARE @SchemaID INT
DECLARE @TableID INT
DECLARE @IndexID INT
DECLARE @isPrimaryKey BIT
DECLARE @IndexType INT
DECLARE @CreateSQL VARCHAR(MAX)
DECLARE @IndexColsSQL VARCHAR(MAX)
DECLARE @WithSQL VARCHAR(MAX)
DECLARE @IncludeSQL VARCHAR(MAX)
DECLARE @WhereSQL VARCHAR(MAX)
DECLARE @SQL VARCHAR(MAX)
DECLARE @DropSQL VARCHAR(MAX)
DECLARE @ExistsSQL VARCHAR(MAX)
DECLARE @IndexName SYSNAME
DECLARE @TblSchemaName SYSNAME
SELECT @TblSchemaName = QUOTENAME(@Schemaname) + '.' + QUOTENAME(@TableName)
SELECT @CreateSQL = ''
SELECT @IndexColsSQL = ''
SELECT @WithSQL = ''
SELECT @IncludeSQL = ''
SELECT @WhereSQL = ''
WHILE EXISTS(SELECT 1
FROM @IDXTable
WHERE CreateScript IS NULL)
BEGIN
SELECT TOP 1 @SchemaID = Schema_ID
,@TableID = Object_ID
,@IndexID = Index_ID
,@isPrimaryKey = IsPrimaryKey
,@IndexName = IndexName
,@IndexType = IndexType
FROM @IDXTable
WHERE CreateScript IS NULL
AND SchemaName = @SchemaName
AND TableName = @TableName
ORDER BY Index_ID
IF @isPrimaryKey = 1
BEGIN
SELECT @ExistsSQL = ' EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N''' + @TblSchemaName + ''') AND name = N''' + @IndexName + ''')'
SELECT @DropSQL = ' ALTER TABLE '+ @TblSchemaName + ' DROP CONSTRAINT [' + @IndexName + ']'
END
ELSE
BEGIN
SELECT @ExistsSQL = ' EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N''' + @TblSchemaName + ''') AND name = N''' + @IndexName + ''')'
SELECT @DropSQL = ' DROP INDEX [' + @IndexName + '] ON ' + @TblSchemaName
END
IF @IndexType IN (1,2)
BEGIN
SELECT @CreateSQL = CASE
WHEN SI.is_Primary_Key = 1 THEN
'ALTER TABLE ' + @TblSchemaName
+ ' ADD CONSTRAINT [' + @IndexName + '] PRIMARY KEY ' + SI.type_desc
WHEN SI.Type IN (1,2) THEN
' CREATE ' + CASE SI.is_Unique
WHEN 1 THEN ' UNIQUE ' ELSE '' END + SI.type_desc + ' INDEX ' + QUOTENAME(SI.Name) + ' ON ' + @TblSchemaName
END
,@IndexColsSQL = ( SELECT SC.Name + ' '
+ CASE SIC.is_descending_key
WHEN 0 THEN ' ASC '
ELSE 'DESC'
END + ','
FROM SYS.INDEX_COLUMNS SIC
JOIN SYS.COLUMNS SC
ON SIC.Object_ID = SC.Object_ID
AND SIC.Column_ID = SC.Column_ID
WHERE SIC.OBJECT_ID = SI.Object_ID
AND SIC.Index_ID = SI.Index_ID
AND SIC.is_included_column = 0
ORDER BY SIC.Key_Ordinal
FOR XML PATH(''))
,@WithSQL =' WITH (PAD_INDEX = ' + CASE SI.is_padded WHEN 1 THEN 'ON' ELSE 'OFF' END + ',' + CHAR(13) +
' IGNORE_DUP_KEY = ' + CASE SI.ignore_dup_key WHEN 1
THEN 'ON' ELSE 'OFF' END + ',' + CHAR(13) +
' ALLOW_ROW_LOCKS = ' + CASE SI.Allow_Row_Locks WHEN
1 THEN 'ON' ELSE 'OFF' END + ',' + CHAR(13) +
' ALLOW_PAGE_LOCKS = ' + CASE SI.Allow_Page_Locks WHEN
1 THEN 'ON' ELSE 'OFF' END + ',' + CHAR(13) +
CASE SI.Type WHEN 2 THEN 'SORT_IN_TEMPDB = ' + @SORT_IN_TEMPDB
+',DROP_EXISTING = ' + @DROP_EXISTING + ',' ELSE '' END +
CASE WHEN SI.Fill_Factor > 0 THEN ' FILLFACTOR =
' + CONVERT(VARCHAR(3),SI.Fill_Factor) + ',' ELSE '' END +
' STATISTICS_NORECOMPUTE = ' + @STATISTICS_NORECOMPUTE
+ ', SORT_IN_TEMPDB = ' + @SORT_IN_TEMPDB +
', ONLINE = ' + @ONLINE + ') ON ' + QUOTENAME(SFG.Name)
,@IncludeSQL = ( SELECT QUOTENAME(SC.Name) + ','
FROM SYS.INDEX_COLUMNS SIC
JOIN SYS.COLUMNS SC
ON SIC.Object_ID = SC.Object_ID
AND SIC.Column_ID = SC.Column_ID
WHERE SIC.OBJECT_ID
= SI.Object_ID
AND SIC.Index_ID = SI.Index_ID
AND SIC.is_included_column = 1
ORDER BY SIC.Key_Ordinal
FOR XML PATH(''))
,@WhereSQL = SI.Filter_Definition
FROM SYS.Indexes SI
JOIN SYS.FileGroups SFG
ON SI.Data_Space_ID =SFG.Data_Space_ID
WHERE Object_ID = @TableID
AND Index_ID = @IndexID
SELECT @IndexColsSQL = '(' + SUBSTRING(@IndexColsSQL,1,LEN(@IndexColsSQL)-1) + ')'
IF LTRIM(RTRIM(@IncludeSQL)) <> ''
SELECT @IncludeSQL = ' INCLUDE (' + SUBSTRING(@IncludeSQL,1,LEN(@IncludeSQL)-1) + ')'
IF LTRIM(RTRIM(@WhereSQL)) <> ''
SELECT @WhereSQL = ' WHERE (' + @WhereSQL + ')'
END
IF @IndexType = 3
BEGIN
SELECT @CreateSQL = ' CREATE ' + CASE
WHEN SI.Using_xml_index_id IS NULL THEN ' PRIMARY '
ELSE '' END
+ SI.type_desc + ' INDEX ' + QUOTENAME(SI.Name) + ' ON ' + @TblSchemaName
,@IndexColsSQL = ( SELECT SC.Name + ' '
+ ','
FROM SYS.INDEX_COLUMNS SIC
JOIN SYS.COLUMNS SC
ON SIC.Object_ID = SC.Object_ID
AND SIC.Column_ID = SC.Column_ID
WHERE SIC.OBJECT_ID = SI.Object_ID
AND SIC.Index_ID = SI.Index_ID
AND SIC.is_included_column = 0
ORDER BY SIC.Key_Ordinal
FOR XML PATH(''))
,@WithSQL =' WITH (PAD_INDEX = ' + CASE SI.is_padded WHEN 1 THEN 'ON' ELSE 'OFF' END + ',' + CHAR(13) +
' ALLOW_ROW_LOCKS = ' + CASE SI.Allow_Row_Locks WHEN
1 THEN 'ON' ELSE 'OFF' END + ',' + CHAR(13) +
' ALLOW_PAGE_LOCKS = ' + CASE SI.Allow_Page_Locks WHEN
1 THEN 'ON' ELSE 'OFF' END + ',' + CHAR(13) +
CASE SI.Type WHEN 2 THEN 'SORT_IN_TEMPDB = OFF,DROP_EXISTING = OFF,' ELSE '' END +
CASE WHEN SI.Fill_Factor > 0 THEN ' FILLFACTOR = ' + CONVERT(VARCHAR(3),SI.Fill_Factor) + ',' ELSE '' END +
'SORT_IN_TEMPDB = OFF ' + ') '
,@IncludeSQL = ' USING XML INDEX [' + (SELECT Name
FROM SYS.XML_Indexes SIP
WHERE SIP.Object_ID = SI.Object_ID
AND SIP.Index_ID = SI.Using_XML_Index_ID) + '] FOR PATH '
FROM SYS.XML_Indexes SI
WHERE SI.Object_ID = @TableID
AND SI.Index_ID = @IndexID
SELECT @IndexColsSQL = '(' + SUBSTRING(@IndexColsSQL,1,LEN(@IndexColsSQL)-1) + ')'
END
SELECT @CreateSQL = @CreateSQL
+ @IndexColsSQL + CASE WHEN @IndexColsSQL <> '' THEN CHAR(13) ELSE ''
END
+ ISNULL(@IncludeSQL,'') + CASE WHEN @IncludeSQL <> '' THEN CHAR(13) ELSE
'' END
+ ISNULL(@WhereSQL,'') + CASE WHEN @WhereSQL <> '' THEN CHAR(13) ELSE
'' END
--+ @WithSQL
UPDATE @IDXTable
SET CreateScript = @CreateSQL
,DropScript = @DropSQL
,ExistsScript = @ExistsSQL
WHERE Schema_ID = @SchemaID
AND Object_ID = @TableID
AND Index_ID = @IndexID
END
-- PRINT REPLICATE('-',100)
-- PRINT 'DROP Indexes'
-- PRINT REPLICATE('-',100)
if @is_Create = 0
begin
UPDATE @IDXTable
SET Processed = 0
WHERE SchemaName = @SchemaName
AND TableName = @TableName
WHILE EXISTS(SELECT 1
FROM @IDXTable
WHERE ISNULL(Processed,0) = 0
AND SchemaName = @SchemaName
AND TableName = @TableName )
BEGIN
SELECT @SQL = ''
SELECT TOP 1 @SchemaID = Schema_ID
,@TableID = Object_ID
,@IndexID = Index_ID
,@SQL = 'IF ' + ExistsScript + CHAR(13) + DropScript + CHAR(13)
FROM @IDXTable
WHERE ISNULL(Processed,0) = 0
AND SchemaName = @SchemaName
AND TableName = @TableName
ORDER BY IndexType DESC,Index_id DESC
PRINT @sql
UPDATE @IDXTable
SET Processed = 1
WHERE Schema_ID = @SchemaID
AND Object_ID = @TableID
AND Index_ID = @IndexID
END
end
--PRINT REPLICATE('-',100)
--PRINT 'Create Indexes'
-- PRINT REPLICATE('-',100)
if @is_Create = 1
begin
UPDATE @IDXTable
SET Processed = 0
WHERE SchemaName = @SchemaName
AND TableName = @TableName
WHILE EXISTS(SELECT 1
FROM @IDXTable
WHERE ISNULL(Processed,0) = 0
AND SchemaName = @SchemaName
AND TableName = @TableName )
BEGIN
SELECT @SQL = ''
SELECT TOP 1 @SchemaID = Schema_ID
,@TableID = Object_ID
,@IndexID = Index_ID
,@SQL = 'IF NOT ' + ExistsScript + CHAR(13) + CreateScript + CHAR(13)
FROM @IDXTable
WHERE ISNULL(Processed,0) = 0
AND SchemaName = @SchemaName
AND TableName = @TableName
ORDER BY IndexType DESC,Index_id DESC
PRINT @sql
UPDATE @IDXTable
SET Processed = 1
WHERE Schema_ID = @SchemaID
AND Object_ID = @TableID
AND Index_ID = @IndexID
END
end
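For a hypothetical nonclustered index IX_Orders_CustomerID on dbo.Orders (both names illustrative, not from the script above), the ExistsScript/DropScript pair that the drop loop prints comes out roughly as:

```sql
IF  EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[Orders]') AND name = N'IX_Orders_CustomerID')
DROP INDEX [IX_Orders_CustomerID] ON [dbo].[Orders]
```

The create loop prints the mirror image, `IF NOT EXISTS (...)` followed by the CREATE INDEX statement, so the output can be re-run safely against a database where some of the indexes already exist.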
SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft OLE DB Provider for SQL Server 2012 Analysis Services." Hresult: 0x80004005
Description: "Internal error: An unexpected error occurred (file 'pcxmlacommon.cpp', line 43, function 'PCFault::RaiseError').
I'm getting the above error in the pre-execute phase of a DFT when I try to fetch data from an SSAS cube using an MDX query.
I'm using the OLE DB provider to connect to the cube.
I found one resolution for error code 0x80004005 that suggested adding 'Format=Tabular' to the cube's connection string, but that does not seem to work either. Can anyone help me out on this?

You are probably missing an update.
I saw a MS Connect post https://connect.microsoft.com/SQLServer/feedback/details/250920/error-using-oledb-or-datareader-to-get-analysis-services-data where that suggestion was proposed as a fix, but it is for an older SQL Server version.
So the question is whether you are pulling data from SQL Server 2012 SSAS using SSIS 2012 (so that no other build is involved).
Arthur My Blog
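For reference, the 'Format=Tabular' setting discussed above goes into the OLE DB connection string for the Analysis Services provider. A minimal sketch, where the server and catalog names are placeholders:

```
Provider=MSOLAP;Data Source=MyServer;Initial Catalog=MyCubeDatabase;Format=Tabular
```

Without Format=Tabular the provider returns MDX results as a multidimensional cellset, which the SSIS OLE DB source cannot consume as a rowset.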