Sbdadm create-lu results in sbdadm: data file error
Hello,
I am trying to create a new thin-provisioned LUN by issuing the following:
sbdadm --disk-size 100M /export/data.lun
This yields this error message: sbdadm: data file error
I have tried that on two other Solaris Express 11 installations and I can't seem to get it to work. I have been able to import my previous COMSTAR config and I can access my existing iSCSI targets just fine, but I cannot create a new one.
I actually haven't tried to run sbdadm against a real device or a zvol though.
Has anybody experienced this?
Thanks,
Stephan
I've never really used the sbdadm command, but your example seems to be missing a command keyword, such as 'create-lu' or similar.
sbdadm create-lu --disk-size 100M /export/data.lun
.. or did you just forget 'create-lu' in your post?
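For completeness, the usual end-to-end sequence on COMSTAR looks roughly like this (a hedged sketch: the `<GUID>` placeholder and the view step assume a default STMF setup on your box):

```shell
# Create a file-backed logical unit; with --disk-size, sbdadm creates
# the thin backing file for you:
sbdadm create-lu --disk-size 100M /export/data.lun

# Confirm the LU was registered and note its GUID:
sbdadm list-lu

# Make the LU visible to initiators (here: all hosts, all target groups;
# <GUID> is the value printed by list-lu):
stmfadm add-view <GUID>
```

If create-lu still fails with "data file error", check that /export exists and is writable, and that the backing file does not already exist.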
.7/M.
Similar Messages
-
I have an iMac with 2 internal drives and a multiple-user account set-up. How do I create a path to store data files on the second drive from within an application?
This is the Mac mini forum not the iMac forum however...
Applications written for average users, like Photoshop or Word, i.e., GUI-based applications, provide a 'Save' dialog box which allows selecting a second drive or any drive. The dialog box initially shown might be in the simple mode, but you just need to click on the triangle to show the full set of options. You should then see the different drive names amongst other options.
If you're referring to an application you're writing yourself, then you need to build a pathname. This can be in one of two styles depending on the programming system you're using: a POSIX-style path or a Mac-style path.
POSIX = /Volumes/volname/foldername
Mac style = Volname:foldername: -
PST is not an Outlook data file error coming while opening the Outlook
Hi,
Suddenly my Outlook shows a ".pst is not an Outlook data file" error while starting. No big changes were made to the system to cause this; I had upgraded IE to version 9 the same day. I have done lots of troubleshooting but am still not able to fix my .pst
file. All my mail is in that .pst and there is no backup of it.
I am using Office Outlook 2007. I have already tried the following to make it work, but failed each time.
1. Tried to repair the .pst file using the scanpst tool on the system, but it closed due to an unexpected error at Phase 6.
2. Successfully completed the repair using scanpst from another Outlook 2010 system and added the .pst back to my system's Outlook, but it had only an Inbox folder in it, which was empty.
3. Tried to repair it using other third-party tools, but none succeeded.
Please let me know if anything can be done to make my old .pst work again.
Sanjay
Hi,
What's the size of that .pst file?
Right-click on the file, click Properties, and make sure the Read-only attribute is not set. Then run Scanpst.exe again to check the result:
http://support.microsoft.com/kb/272227
Regards,
Melon Chen
TechNet Community Support -
Hi,
I have an RDS 2012 session deployment in Azure with connection broker high availability.
The "Remote Desktop Management" service does not start automatically when the connection broker virtual machines are stopped and started.
I see the error below in the event logs of both connection broker VMs.
Note: When I manually start the "Remote Desktop Management" service after this error, everything works without issues.
I get
Error ID 46 - Crash dump initialization failed!
Warning 10154 - in Microsoft-Windows-Windows Remote Management
The WinRM service failed to create the following SPNs:
Additional Data
The error received was 1355
Hi,
Thank you for posting in Windows Server Forum.
In respect to error 46, this issue may occur if the computer boots without a configured dump file. The default dump file is the pagefile. During a clean Windows OS installation, the very first boot will hit this condition, as the pagefile has not been set up yet.
To resolve this issue, you may want to complete the paging file configuration.
More information:
Event ID 46 logged when you start a computer
http://support.microsoft.com/kb/2756313/EN-US
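A minimal sketch of completing that configuration from an elevated command prompt (assuming you simply want Windows to manage the pagefile again; see the KB article for the full procedure):

```shell
# Let Windows manage the pagefile so a dump file is configured at boot:
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=True

# Verify the resulting pagefile settings:
wmic pagefileset list /format:list
```

A reboot is needed for the change to take effect.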
In regards to error 10154, you need to create the SPNs specified in the event using the setspn.exe utility, and also grant the "Validated Write to Service Principal Name" permission to NETWORK SERVICE.
For more information, refer to the articles below.
Event ID 10154 — Configuration
http://technet.microsoft.com/en-us/library/dd348559(v=ws.10).aspx
Domain Controllers Warning Event ID: 10154
http://srvcore.wordpress.com/2010/01/02/domain-controllers-warning-event-id-10154/
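As a hedged illustration (the host names below are placeholders; use the exact SPNs listed under Additional Data in your event 10154 entry), the setspn part looks like:

```shell
# Register the WSMAN SPNs on the connection broker's computer account;
# -S checks for duplicates before adding:
setspn -S WSMAN/RDCB01 RDCB01
setspn -S WSMAN/RDCB01.contoso.com RDCB01

# Confirm the SPNs now present on the account:
setspn -L RDCB01
```

Granting the "Validated Write to Service Principal Name" permission to NETWORK SERVICE is done in the directory ACLs, as described in the linked TechNet article.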
Hope it helps!
Thanks,
Dharmesh -
OraRRP Error with "Unable to copy data file;Error code 2, check disk space"
Hi,
Some users get the message "Unable to copy data file;Error code 2, check disk space" when running a report with orarrp, but most users do not.
I checked free space on both the server and client side; it is more than sufficient.
I also checked that the directory for the REPORTXX_TMP variable exists.
My users call reports via URL (rwservlet) and it occurs for all reports.
How can I solve this problem?
Thanks in advance.
Tawatchai R.
Hi,
I have the same problem now. One user temporarily has problems downloading .rrpa files via a URL (rwservlet) request. Error code: "Unable to copy data file;Error code 2, check disk space". Did you get a solution?
Thanks in advance. Axel -
HELP ME! Creating a table from a data file
Hi
I'm writing an application for data visualization. The user can press the "open file" button and a FileChooser window will come up where the user can select any data file. I would like to take that data file and display it as a table with rows and columns. The user needs to be able to select the columns to create a graph. I have tried many ways to create a table, but nothing seems to work! Can anyone help me? I just want to read from the data file and create a spreadsheet-type table... I won't know how many rows and columns I'll need in advance, so the table needs to be dynamic!
If you have ANY tips, I'd REALLY appreciate it.
"I won't know how many rows and columns I'll need in advance, so the table needs to be dynamic!"
You may use a List (ArrayList, LinkedList or Vector) for that.
Lists allow you to add elements dynamically. -
ORA-01157: cannot identify/lock data file error in standby database.
Hi,
I have a primary database and a standby database (11.2.0.1.0) running in ASM with different diskgroup names. I applied an incremental backup on the standby database to resolve an archive log gap, generated a standby controlfile on the primary database, and restored that controlfile on the standby database. But when I start the MRP process, it does not start and throws ORA-01157: cannot identify/lock data file in the alert log. When I query the standby database files, it shows the primary database datafile names, not the standby's.
PRIMARY DATABASE
SQL> select name from v$datafile;
NAME
+DATA/oradb/datafile/system.256.788911005
+DATA/oradb/datafile/sysaux.257.788911005
+DATA/oradb/datafile/undotbs1.258.788911005
+DATA/oradb/datafile/users.259.788911005
STANDBY DATABASE
SQL> select name from v$datafile;
NAME
+STDBY/oradb/datafile/system.256.788911005
+STDBY/oradb/datafile/sysaux.257.788911005
+STDBY/oradb/datafile/undotbs1.258.788911005
+STDBY/oradb/datafile/users.259.788911005
The Actual physical location of standby database files in ASM in standby server is shown below
ASMCMD> pwd
+STDBY/11gdb/DATAFILE
ASMCMD>
ASMCMD> ls
SYSAUX.259.805921967
SYSTEM.258.805921881
UNDOTBS1.260.805922023
USERS.261.805922029
ASMCMD>
ASMCMD> pwd
+STDBY/11gdb/DATAFILE
I even tried to rename the datafiles in the standby database, but it throws an error:
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01275: Operation RENAME is not allowed if standby file management is
automatic.
Regards,
007
Hi Saurabh,
I tried to rename the datafiles in the standby database after restoring; it throws the error below:
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01275: Operation RENAME is not allowed if standby file management is
automatic.
Also in my pfile i have mentioned the below parameters
*.db_create_file_dest='+STDBY'
*.db_domain=''
*.db_file_name_convert='+DATA','+STDBY'
*.db_name='ORADB'
*.db_unique_name='11GDB'
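For reference, the workaround usually suggested for ORA-01275 (a hedged sketch only; the target file names are taken from the ASMCMD listing above, and managed recovery must be stopped first) is to switch standby file management to MANUAL, rename, and switch back:

```shell
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
ALTER DATABASE RENAME FILE '+STDBY/oradb/datafile/system.256.788911005'
                        TO '+STDBY/11gdb/datafile/system.258.805921881';
-- ...repeat for SYSAUX, UNDOTBS1 and USERS...
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
EOF
```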
Regards,
007 -
Creating a service program in inventory results in 'no data found error'
Hi,
I tried creating a service program as described in the Sep 2001 issue of the CRM newsletter. While saving the item, it results in the following error:
FRM-40735 ON-INSERT trigger raised unhandled exception ORA-01653
ORA-01403 No data Found
What could be the possible cause for the occurrence of this error?
Thanks & Regards,
Nithya
You're doing a lot of unnecessary conversion of dates into character strings and back into dates again.
Keep it simple and just use dates as-is:
SQL> select * from T_REC_PER;
VAL START_PERIOD END_PERIOD
5 15-OCT-2008 00:00:00 30-OCT-2008 00:00:00
6 31-OCT-2008 00:00:00 06-NOV-2008 00:00:00
8 07-NOV-2008 00:00:00 12-NOV-2008 00:00:00
SQL> declare
2 ls_per date := to_date('17-OCT-2008','DD-MON-YYYY');
3 ln_val number;
4 begin
5 select t.val
6 into ln_val
7 from T_REC_PER t
8 where ls_per between t.start_period and t.end_period;
9 dbms_output.put_line(ln_val);
10 end;
11 /
5
PL/SQL procedure successfully completed.
SQL> declare
2 ls_per date := to_date('09-NOV-2008','DD-MON-YYYY');
3 ln_val number;
4 begin
5 select t.val
6 into ln_val
7 from T_REC_PER t
8 where ls_per between t.start_period and t.end_period;
9 dbms_output.put_line(ln_val);
10 end;
11 /
8
PL/SQL procedure successfully completed. -
Director 12- can't create a windows projector - problem writing file error
I am trying to create a Windows projector of a project. When I do, I get the following error: "problem writing file - file name - Can't compress file that has been modified and not saved." The file has not been modified and has been saved. Any help would be great.
Hi.
You say you have tried publishing to a new empty folder. From the video I can see that the folder has table1.app in there (25 secs in) which is a Mac projector that has been published.
Yet you have Windows Projector checkbox ticked at the start of the video.
I publish Mac and Windows projectors to entirely separate folders as a matter of practice.
I call the folders "Published" and "PublishedMac" and that is where the respective projectors for each piece of software lives.
Perhaps there is some mix-up between Mac and Windows publishing going on because you are publishing both to the same folder?
Hope this helps.
Richie -
Creating abap data flow, open file error
hello experts,
I am trying to pull all the fields of the MARA table in BODS,
so I am using an ABAP data flow. But after executing the job I got the error "can't open the .dat file".
I am new to ABAP data flows, so I think maybe I made a mistake in the configuration of the datastore.
Can anyone guide me on how to create a datastore for an ABAP data flow?
In your SAP Applications datastore, are you using "Shared Directory" or "FTP" as the "Data transfer method"? Given the error, probably the former. In that case, the account used by the Data Services job server must have access to wherever SAP is putting the .DAT files. When you run an ABAP dataflow, SAP runs the ABAP extraction code (of course) and then exports or saves the results to a .DAT file, which I believe is just a tab-delimited flat text file, in the folder "Working directory on SAP server." This is specified from the perspective of the SAP server, e.g., "E:\BODS\TX," where the E:\BODS\TX folder is local to the SAP application server. I believe this folder is specified as a directive to the ABAP code, telling SAP where to put the .DAT files. The DS job server then picks them up from there, and you tell it how to get there via "Application path to the shared directory," which, in the above case, might be "\\SAPDEV1\BODS\TX" if you shared out the E:\BODS folder as "BODS" and the SAP server was SAPDEV1. Anyway: the DS job server needs to be able to read files at \\SAPDEV1\BODS\TX, and may not have any rights to do so, especially if it's just logging in as Local System. That's likely your problem. In a Windows networking environment, I always have the DS job server log in using an AD account, which then needs to be granted privileges to, in our example's case, the \\SAPDEV1\BODS\TX folder. This also comes in handy for getting to data sources, sometimes.
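A quick way to test this (run as the job server's account on the DS machine; the server and share names match the example above):

```shell
# Can the job server account list the transfer directory over the share?
dir \\SAPDEV1\BODS\TX

# On the SAP application server, inspect who has rights on the folder:
icacls E:\BODS\TX
```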
Best wishes,
Jeff Prenevost
Data Services Practice Manager
itelligence -
Extra zeros in numeric fields when I create spreadsheet from data files
I created a form with LiveCycle, and then in Adobe I use the option "create a spreadsheet from data files" to import the answers into an .xls file.
I have two problems with this: the first is that I get 8 extra zeros in each numeric field; the second is that fields are ordered by type and not by appearance.
I tried a different approach, importing directly into Excel. That way there are no extra zeros and the order is correct, but I get just one form each time, and in a "stairways" fashion (i.e., first field in A1, second in B2, and so on).
I'd appreciate any help.
Where do you find the command "create a spreadsheet from data files"?
Anyway, I have a similar problem: when I export data to a .csv file and then import it into Excel, I always get 8 extra zeros... how can I solve the problem? -
Select Byte Order when Saving a DAT file
Hi,
I need to save a DAT file selecting "Low -> High" as the byte order, but nothing works. No matter what I do, it is always stored as "High -> Low".
I tried assigning the value "Low -> High" to the variables FileHdByteOrder & DataSetByteOrder, but when saving, the values return to "High -> Low" and are ignored.
Is there a way to do it?
Thanks in advance.
Marc.
Hi Brad,
Thanks for the clarification.
We generate the DAT files with LabVIEW (any byte order can be selected) and then we use a custom Java application to manage a database results from the DAT files.
Sometimes we need to work on the files using DIAdem, but that gives us many problems, as our Java application only supports big-endian DAT files. I think we will have to implement that feature in our application.
Anyway, it would be nice to be able to select the byte order in DIAdem.
Best regards,
Marc. -
Reading data file(txt) present in a file system from a package deployed in SSISDB
Hi,
I am trying to create and later read a data file from a package deployed in SSISDB, but it is not reading the file, while it does successfully create it. The same package, when run from the file system, runs successfully. Generating the .ispac and deploying to SSISDB runs indefinitely. Any idea? Is it a permission issue?
Thanks
Ayush
Hi Ayush,
If I understand correctly, when you directly execute a package from the file system source via the Execute Package Utility, it works fine. When you right-click on a package under Integration Services Catalogs \ SSISDB \ <Folder name> \ Projects \ <Project name> \ Packages \ <Package name> and select Execute... to run the package in SQL Server Management Studio, it fails.
When we execute a package under Integration Services Catalogs, the package runs under the credentials used to connect to SQL Server Management Studio. Please note that we need to run the package using Windows Authentication.
As to your issue, please make sure the account that connects to SQL Server Management Studio has the required permissions to access the file folder and file outside the SSIS package.
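For example (a sketch only; the folder path and account name below are placeholders for your environment), read access can be granted with icacls:

```shell
# Grant the executing Windows account read access to the data folder,
# inherited by subfolders (CI) and files (OI):
icacls "D:\SSIS\Data" /grant "CONTOSO\ssis_user:(OI)(CI)R"

# Verify the resulting ACL:
icacls "D:\SSIS\Data"
```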
If there are any other questions, please feel free to ask.
Thanks,
Katherine Xiong
TechNet Community Support -
Load blob to file. error: file write error
Hi,
I used a procedure which loads a BLOB to a file. It works on Oracle 10g, but it doesn't work on Oracle 11g. Why?
create or replace PROCEDURE load_blob_to_bfile (p_file_id IN VARCHAR2, p_directory IN VARCHAR2, p_ident in varchar2 default NULL)
IS
v_blob BLOB;
v_start NUMBER := 1;
v_bytelen NUMBER := 2000;
v_len NUMBER;
v_raw RAW (2000);
v_x NUMBER;
v_output UTL_FILE.file_type;
v_file_name VARCHAR2 (200);
BEGIN
-- get length of blob
SELECT DBMS_LOB.getlength (blob_content), filename
INTO v_len, v_file_name
FROM wwv_flow_files
WHERE filename = p_file_id;
-- define output directory
v_output := UTL_FILE.fopen (p_directory, p_ident||'_'||v_file_name, 'wb', 32760);
-- save blob length
v_x := v_len;
-- select blob into variable
SELECT blob_content
INTO v_blob
FROM wwv_flow_files
WHERE filename = p_file_id;
v_start := 1;
WHILE v_start < v_len AND v_bytelen > 0
LOOP
DBMS_LOB.READ (v_blob, v_bytelen, v_start, v_raw);
UTL_FILE.put_raw (v_output, v_raw);
UTL_FILE.fflush (v_output);
/* Text only could be: UTL_RAW.cast_to_varchar2 (v_raw);*/
-- set the start position for the next cut
v_start := v_start + v_bytelen;
-- set the end position if less than 32000 bytes
v_x := v_x - v_bytelen;
IF v_x < 2000
THEN
v_bytelen := v_x;
END IF;
END LOOP;
UTL_FILE.fclose (v_output);
END;
The directory is created and granted read and write. It looks like the file is created, but the application raises a file write error, and the files in the directory are empty.
TomasB
So, I'm copying the file into a temporary BLOB with this function:
FUNCTION get_remote_binary_data (p_conn IN OUT NOCOPY UTL_TCP.connection,
p_file IN VARCHAR2)
RETURN BLOB IS
l_conn UTL_TCP.connection;
l_amount PLS_INTEGER;
l_buffer RAW(32767);
l_data BLOB;
BEGIN
DBMS_LOB.createtemporary (lob_loc => l_data,
CACHE => TRUE,
dur => DBMS_LOB.CALL);
l_conn := get_passive(p_conn); -- get a passive connection
send_command(p_conn, 'RETR ' || p_file, TRUE); -- send RETR command to the server
BEGIN
LOOP
l_amount := UTL_TCP.read_raw (l_conn, l_buffer, 32767);
DBMS_LOB.writeappend(l_data, l_amount, l_buffer);
END LOOP;
EXCEPTION
WHEN UTL_TCP.END_OF_INPUT THEN
NULL;
WHEN OTHERS THEN
NULL;
END;
UTL_TCP.close_connection(l_conn);
get_reply(p_conn);
RETURN l_data;
END;
Then I'm writing it into a local file:
PROCEDURE put_local_binary_data (p_data IN BLOB,
p_dir IN VARCHAR2,
p_file IN VARCHAR2) IS
l_out_file UTL_FILE.FILE_TYPE;
l_buffer RAW(32767);
l_amount BINARY_INTEGER;
l_pos INTEGER := 1;
l_blob_len INTEGER;
BEGIN
l_blob_len := DBMS_LOB.getlength(p_data);
l_amount := DBMS_LOB.GETCHUNKSIZE(p_data);
IF (l_amount >= 32767) THEN
l_amount := 32767;
END IF;
l_out_file := UTL_FILE.FOPEN(p_dir, p_file, 'w', 32767);
WHILE l_pos < l_blob_len LOOP
DBMS_LOB.READ (p_data, l_amount, l_pos, l_buffer);
UTL_FILE.put_raw(l_out_file, l_buffer, FALSE);
l_pos := l_pos + l_amount;
END LOOP;
UTL_FILE.FCLOSE(l_out_file);
EXCEPTION
WHEN OTHERS THEN
IF UTL_FILE.IS_OPEN(l_out_file) THEN
UTL_FILE.FCLOSE(l_out_file);
END IF;
RAISE;
END;
I've checked the BLOB before writing it to the file (UTL_FILE.put_raw(l_out_file, l_buffer, FALSE)), and it contains no carriage return. -
JDBC-ODBC Bridge to SPSS data files - Result Set Type is not supported
Hello,
As mentioned in the subject I am trying to read SPSS data files using the SPSS 32-Bit data driver, ODBC and the JDBC-ODBC Bridge.
Using this SPSS driver I managed to read the data directly into an MS SQL Server using:
SELECT [...] FROM
OPENROWSET(''MSDASQL.1'',''DRIVER={SPSS 32-BIT Data Driver (*.sav)};DBQ=' SomePathWhereTheFilesAre';SERVER=NotTheServer'', ''SELECT 'SomeSPSSColumn' FROM "'SomeSPSSFileNameWithoutExt'"'') AS a
This works fine!
Using Access and an ODBC System DSN works for IMPORTING but NOT for LINKING.
It is even possible to read the data using the very slow SPSS API.
However, when it comes to JDBC-ODBC, the code below only works in part. The driver is loaded successfully, but when it comes to transferring data into the ResultSet object, the error
SQLState: null
Result Set Type is not supported
Vendor: 0
occurs.
The official answer from SPSS is to use .NET or to use their Python implementation in their new version 14.0. But this is obviously not an option when you want to use only Java.
Does anybody have experience with SPSS and JDBC-ODBC? I have tried the possible ResultSet types, which I took from:
http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/ad/rjvdsprp.htm
and none of them worked.
Thank you in advance for your ideas and input & stay happy!
Here is the code, without the rest of the class around it:
// Module: SimpleSelect.java
// Description: Test program for ODBC API interface. This java application
// will connect to a JDBC driver, issue a select statement
// and display all result columns and rows
// Product: JDBC to ODBC Bridge
// Author: Karl Moss
// Date: February, 1996
// Copyright: 1990-1996 INTERSOLV, Inc.
// This software contains confidential and proprietary
// information of INTERSOLV, Inc.
public static void main1() {
String url = "jdbc:odbc:SomeSystemDNS";
String query = "SELECT SomeSPSSColumn FROM 'SomeSPSSFileName'";
try {
// Load the jdbc-odbc bridge driver
Class.forName ("sun.jdbc.odbc.JdbcOdbcDriver");
DriverManager.setLogStream(System.out);
// Attempt to connect to a driver. Each one
// of the registered drivers will be loaded until
// one is found that can process this URL
Connection con = DriverManager.getConnection (url);
// If we were unable to connect, an exception
// would have been thrown. So, if we get here,
// we are successfully connected to the URL
// Check for, and display, any warnings generated
// by the connect.
checkForWarning (con.getWarnings ());
// Get the DatabaseMetaData object and display
// some information about the connection
DatabaseMetaData dma = con.getMetaData ();
System.out.println("\nConnected to " + dma.getURL());
System.out.println("Driver " +
dma.getDriverName());
System.out.println("Version " +
dma.getDriverVersion());
System.out.println("");
// Create a Statement object so we can submit
// SQL statements to the driver
Statement stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY ,ResultSet.CONCUR_READ_ONLY);
// Submit a query, creating a ResultSet object
ResultSet rs = stmt.executeQuery (query);
// Display all columns and rows from the result set
dispResultSet (rs);
// Close the result set
rs.close();
// Close the statement
stmt.close();
// Close the connection
con.close();
}
Thank you for your reply, StuDerby!
Actually, the script above was, as you suggested, previously leaving the ResultSet type at the default. That did not work...
I am getting gray hair with SPSS - in terms of connectivity and "integratability", none of the solutions they offer is sufficient from my point of view.
Variable definitions can only be read by the slow API, data can only be read by Python or Microsoft products... and if you want to combine both, you are in big trouble. I can only assume that this is a company strategy to sell their Dimensions Platform to companies, versus having companies develop their applications according to business needs.
Thanks again for any further suggestions, and I hope that some SPSS developer will see this post!
Cheers!!