Usage of ssounlck.sql

Hello there
I have a customer who has the following problem.
"Problem exisits when a user is locked out by entering their password incorrectly more than the number of times specified in the login server configuration."
They have tried to run the ssounlck.sql script but this doe not seem to unlock the user. The only way that they can get this user working again is to maunally unlock the
account, by deleting rows from the wwsso_audit_log_view object.
Please advise me as to whether this is an issue with the ssounlck routine or perhaps we
are using it incorrectly.
Thanks
Richard

Richard,
You may want to search the Oracle9iAS Portal Security and Login Server forum. This forum is for questions relating to the Portal Development Kit.
Thanks,
Sue

Similar Messages

  • CPU Usage by a sql query/insert

    Hi,
    I want to know in which cases a SQL statement uses a lot of CPU, and what are the ways to prevent 90-100% CPU usage.
    Thanks
    Deepak

    Of course it will! If you're doing your Select statement on production, then you will get an accurate representation of what the CPU does on production!
    However, be aware that it is production, and that anything you do could cause performance problems for the rest of the database (we once had a developer who caused massive performance problems in prod just from running a single select statement, although I forget now why it caused the problems *{:-( )
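
    One way to see which statements are burning CPU on the instance is to query V$SQL. This is only a sketch, not from the thread above, and it assumes access to the V$ views and 10g-style columns:
    -- top 10 statements by accumulated CPU time (CPU_TIME is in microseconds)
    SELECT *
      FROM (SELECT sql_id,
                   cpu_time / 1000000 AS cpu_seconds,
                   executions,
                   sql_text
              FROM v$sql
             ORDER BY cpu_time DESC)
     WHERE ROWNUM <= 10;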

  • Experiences on first usage of Oracle SQL Developer Data Modeling

    Hi @ll,
    Having worked with Quest Toad Data Modeler 2.25 for over a year, I'm searching for a replacement with the ability to create ALTER TABLE... statements. Today I downloaded the standalone version and tried to compare my local database against our development server.
    Our usage scenario is that the development database can be changed by each developer. We have a trigger activated for monitoring (and logging) database changes. Currently the tested and released changes will be merged into the Toad data model manually by me.
    Using the Toad model, we create DDL scripts for the SAP supported databases: Oracle, MaxDB, MS-SQL and DB2.
    I'd like to facilitate this process.
    h3. Test 1
    1. Import the current model from the development database (User A), save it as XML
    2. Import the current model from my local database (User B), save it as XML
    3. Compare the XML models
    h4. Results
    a) each table is displayed as modified, although no difference is displayed in any column
    b) in the DDL, source tables are renamed with a "bcp_" prefix
    c) for NVARCHAR2, data types are changed (length from 756 to 510, 1800 to 1200)
    h3. Test 2
    1. Import the current model from the development database (User A), save it as XML
    2. Compare the imported model with the XML
    h4. Results 2
    a) The field attribute "Mandatory" changed from "true" to "false"
    h3. Test 3
    1. Import from database
    2. Compare against an XML schema
    h4. Results 3
    a) Comparing shows the modified tables only - this works nearly as expected
    b) still different length for datatype NVARCHAR2
    h3. Wish List
    a) the Open File dialog always opens in "My Files"; it would be better if it opened in the last used directory
    b) when starting the compare, allow swapping "source" and "target"
    c) allow comparing two database schemas
    d) Support for MaxDB?
    Overall, this new tool looks very promising and I'm looking forward to testing the next versions ;-)

    Hi Christian,
    Thanks for trying Oracle Data Modeling.
    EAR2 is released and the NVARCHAR2 problem is fixed there. Some problems related to your area of interest - applying differences to the database - are also fixed.
    On your observations:
    1) each table is displayed as modified, although no difference is displayed in any column - the column ordering may have changed, or some of the table properties; you can check this by clicking on the table node and looking at the Details tab for more information
    2) in the DDL, source tables are renamed with a "bcp_" prefix - this is the typical rename/create/copy pattern for applying changes that require restructuring of a table while preserving its content in the meantime (a generic sketch of the pattern follows below). If that's not enough, you can try the "Advanced DDL" option, which gives more control over the whole process - a self-controlled script with logging, restarting, an execution window, and error masking. You have the option to unload a table to the file system and load it back after the original table is recreated (LOB columns are not supported - we can add support if there is demand for that). A transformation function can be defined for columns with a changed data type. There are a few words about that in the "Data Modeling overview" document (p. 15) - http://www.oracle.com/technology/products/database/sql_developer/pdf/sqldeveloperdatamodelingoverview.pdf
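    For readers unfamiliar with that pattern, here is a minimal generic sketch; table T and its columns are hypothetical, not taken from any generated DDL:
    -- 1. preserve the existing table and its data under a backup name
    ALTER TABLE t RENAME TO bcp_t;
    -- 2. recreate the table with the new structure
    CREATE TABLE t (
        id   NUMBER PRIMARY KEY,
        name NVARCHAR2(510)
    );
    -- 3. copy the content back, applying any transformation needed for changed columns
    INSERT INTO t (id, name)
        SELECT id, name FROM bcp_t;
    -- 4. drop the backup once the copy has been verified
    DROP TABLE bcp_t;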
    Regards,
    Philip
    Edited by: Philip Stoyanov on Nov 26, 2008 1:40 PM

  • Is this a new usage in PL/SQL?

    Hi friends,
    Today I've tested the new features in 11g and read the document "Oracle Database 11g: The Top New Features for DBAs and Developers", and when I came to page 226, chapter "PL/SQL Performance", I saw this code:
    alter session set plsql_warnings = 'enable:all, disable:06002, disable:06005, disable:06006,
    disable:06010'
    alter session set plsql_ccflags = 'simple:false'
    create package gcd_test is
    procedure time_it;
    end gcd_test;
    create package body gcd_test is
    $if $$simple $then
    subtype my_integer is simple_integer;
    simple constant times.simple%type := 'y';
    $else
    subtype my_integer is pls_integer not null;
    simple constant times.simple%type := 'n';
    $end
    Frankly, I have never seen a package body defined like this...
    the dollar sign with the logic operators - I have never seen it in any PL/SQL reference book - and when I tested it in 11g it didn't cause any error.
    I can't remember this usage in any document...
    so if anybody knows about it, I'd appreciate your help.
    Edited by: user12977032 on Jul 2, 2010 12:09 AM

    user12977032 wrote:
    I mean the code is like some way to define variables dynamically, or following some rules.
    Not really. This new feature is a standard feature in most compilers. It allows you to define and set compiler flags and variables and perform conditional compilation.
    For example, you may have a PL/SQL package that is used on Standard Edition (SE) and Enterprise Edition (EE). However, you would like to use an EE feature that is not available on SE (SE, for example, requires a slower method to be used).
    With conditional compiling you can define a code block that needs to be compiled for EE versions and a different code block that needs to be compiled for SE. Thus you have source code that can be compiled optimally for that server version it is being compiled on.
    This feature has existed for many years in compilers ranging from C to Delphi, and has been sorely missing from the PL/SQL parser and compiler.
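    To make that concrete, here is a minimal sketch of conditional compilation; the flag name use_ee_feature and the procedure are hypothetical, not from the 11g document quoted above:
    alter session set plsql_ccflags = 'use_ee_feature:true';
    create or replace procedure refresh_data is
    begin
      $if $$use_ee_feature $then
        -- compiled only when the flag is true, e.g. an Enterprise Edition-only code path
        dbms_output.put_line('EE code path');
      $else
        -- fallback path compiled on Standard Edition
        dbms_output.put_line('SE code path');
      $end
    end refresh_data;
    /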

  • Basic steps as how to web usage mine in sql server 2005

    Hi there,
    I am doing a project on web usage mining of my university's server logs, and I'm wondering how I go about mining them in SQL Server 2005.
    Do I mine them in one table? Do I normalise the web log data? What algorithms should I use on them, as I'm trying to get usage patterns from the users and also find where most of the users come from?
    Thanks in advance
    Gary

    Hi
    Here are some thoughts that might help you design your data mining for weblogs:
    1. You can put the data in one or more tables as per the semantics of the data. Data which is an entity by itself should be put in a flat table with one key per entry (user sessions on the web server, for example), whereas some data will naturally have a many-to-one relation with the primary data (page visits per session, for example) and can be put in a separate table with a primary/foreign key relationship (see the DDL sketch after this list). SQL Server 2005 will model them as case table/nested table for the purpose of mining.
    2. Normalize: it depends on what your data looks like. If you want to run a clustering algorithm and your data has two attributes, A and B, and A is 10 times more important than B, you should normalize accordingly. If the actual values of A are in the order of thousands and the actual values of B are in the order of tens and they are equally important, you should again normalize. However, if you want to use the as-is values without a weight, you do not have to normalize the data.
    3. Usage patterns, like a sequence of page visits, can be modeled using the sequence clustering algorithm. User categorization based on attributes might use a clustering algorithm. If you have more information on what you want to find out, I might be able to suggest more specific choices.
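    To illustrate point 1, here is a minimal DDL sketch of a case table with a nested (child) table; the table and column names are hypothetical:
    CREATE TABLE web_session (            -- case table: one row per user session
        session_id  INT PRIMARY KEY,
        user_name   VARCHAR(100),
        client_ip   VARCHAR(45),
        started_at  DATETIME
    );
    CREATE TABLE page_visit (             -- nested table: many page visits per session
        visit_id    INT PRIMARY KEY,
        session_id  INT NOT NULL REFERENCES web_session (session_id),
        page_url    VARCHAR(400),
        visited_at  DATETIME
    );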
    Hope this helps
    Shuvro

  • OBIEE 11g Usage Tracking - Physical SQL

    Hi All,
    The Query Text column gives me only the logical SQL. The Log Level is set at 2. How can I get the physical SQL via Usage tracking?
    Thanks for your time and help.

    Hi,
    Good for you.
    Maybe you can post the solution and how you solved your issue to help other users having the same problem, and then close the thread (currently it is still marked as not answered).

  • Usage of Xopen SQL states and SQL Exception?

    Hi
    Is there a way to make full use of SQL exceptions?
    Has anybody used the SQL states from SQLException?
    The API reference specifies that an SQLException object contains an X/Open SQL state, which is a string. But the states in the specs are defined as class and subclass.
    The question is: how can I make use of these Java strings to interpret what exactly happened at the database? Are they really useful? If they are, are there any utilities which convert these strings into a meaningful message? Any pointers on these questions would also help me.
    Thanks in advance.
    Giridhar

    SQLException has an inherited method getMessage() which seems to be quite useful.
    For situations where you want to check on a specific one of several possible (or expected) states (like: maybe the table is not yet created...), I think you can quite well use getSQLState() and also getErrorCode(). Try out in tests which information is returned in which situation; then you can use it for making decisions in your program logic.
    But be aware that all this information is probably DBMS-specific!

  • Usage of java.sql.Timestamp with classes12.zip and ojdbc14.jar  ?

    Hi all,
    If I'm using java.sql.Timestamp with classes12 it functions perfectly;
    if I'm using ojdbc14 and java.sql.Timestamp it functions in a different way and fails to do the action.
    Example: update ... set xxx = yy where time = <my Timestamp object set in the PreparedStatement>
    Hoping to see an answer.

    http://forum.java.sun.com/thread.jspa?threadID=460615&messageID=2116517
    Timestamp insert problem
    Using the "classes12.zip" file that comes with the distribution for Oracle versions 8.1.6.x and 8.1.7.x, Oracle's DATE datatype is mapped to the "java.sql.Timestamp" class. However, the "ojdbc14.jar" driver maps DATE to "java.sql.Date", and "java.sql.Date" only holds a date (without a time), whereas "java.sql.Timestamp" holds both a date and a time.

  • Autonomous Transactions usage in PL/SQL anonymous block coding

    Hi,
    I am trying to incorporate autonomous transactions into our work. I am using the tables provided below:
    CREATE TABLE T1
    (
    F1 INTEGER,
    F2 INTEGER
    );
    CREATE TABLE T2
    (
    F1 INTEGER,
    F2 INTEGER
    );
    insert into t1(f1, f2)
    values(20, 0);
    insert into t2(f1, f2)
    values(10, 0);
    Now, when I use the code snippet given below, it is working as expected.
    create or replace procedure p1 as
    PRAGMA AUTONOMOUS_TRANSACTION;
    begin
         update t2
         set f2 = 25
         where f1 = 10;
         commit;
    end;
    declare
    PRAGMA AUTONOMOUS_TRANSACTION;
    a integer;
    begin
         update t1
         set f2 = 15
         where f1 = 20;
         p1();
         rollback;
    end;
    Here, the update to table t2 is committed and t1 is rolled back; it works as
    expected. I would like to achieve the same functionality through PL/SQL
    anonymous block coding. To do this, I use the following code snippet:
    declare
    PRAGMA AUTONOMOUS_TRANSACTION;
    a integer;
    begin
         update t1
         set f2 = 15
         where f1 = 20;
         begin
              update t2
              set f2 = 35
              where f1 = 10;
              commit;
         end;
         rollback;
    end;
    Here, the data in both tables is committed. How do I change it to work as I
    mentioned above, committing t2 alone? Please help, thank you.
    Regards,
    Deva

    Can you explain what you're trying to accomplish from a business perspective? This doesn't look like a particularly appropriate way to use autonomous transactions, so you may be causing yourself problems down the line.
    That said, padders's solution does appear to work for me:
    SCOTT @ nx102 Local> CREATE TABLE T1
      2  (
      3  F1 INTEGER,
      4  F2 INTEGER
      5  )
      6  /
    Table created.
    Elapsed: 00:00:01.03
    SCOTT @ nx102 Local>
    SCOTT @ nx102 Local>
    SCOTT @ nx102 Local> CREATE TABLE T2
      2  (
      3  F1 INTEGER,
      4  F2 INTEGER
      5  )
      6  /
    Table created.
    Elapsed: 00:00:00.00
    SCOTT @ nx102 Local>
    SCOTT @ nx102 Local> insert into t1(f1, f2)
      2  values(20, 0)
      3  /
    1 row created.
    Elapsed: 00:00:00.01
    SCOTT @ nx102 Local>
    SCOTT @ nx102 Local> insert into t2(f1, f2)
      2  values(10, 0)
      3  /
    1 row created.
    Elapsed: 00:00:00.01
    SCOTT @ nx102 Local> commit;
    Commit complete.
    Elapsed: 00:00:00.01
    SCOTT @ nx102 Local> DECLARE
      2     a INTEGER;
      3 
      4     PROCEDURE update_t2
      5     IS
      6        PRAGMA AUTONOMOUS_TRANSACTION;
      7     BEGIN
      8        UPDATE t2
      9           SET f2 = 35
    10         WHERE f1 = 10;
    11 
    12        COMMIT;
    13     END update_t2;
    14  BEGIN
    15     UPDATE t1
    16        SET f2 = 15
    17      WHERE f1 = 20;
    18    
    19     update_t2;
    20 
    21     ROLLBACK;
    22  END;
    23  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.04
    Have you done something else that would cause a deadlock?
    Justin

  • Re: usage of XML SQL Utility

    Hi there,
    I have a design issue which I posted a few days ago and have reworded here. Hopefully I've made it clearer this time.
    The working scenario, coded in Java, goes like this:
    1> Given an XML string, I parse it out and get a set of values for a certain set of elements, say in one ROW.
    2> I embed this set of values in the WHERE clause of a query string, do a SELECT.
    3> Based on the result from the above SELECT, I do UPDATE, SELECT and INSERT to a few tables.
    My question:
    Could Oracle XML SQL Utility be used here? If yes, how?
    From my reading so far, the Oracle XSU handles the SQL-to-XML and XML-to-SQL mapping very well in terms of a whole XML string. But if, at some point, I want to break up the XML string and mingle some business logic into the process, say a simple calculation, how can I deal with it efficiently?
    Any suggestion would be greatly appreciated. Thanks.
    ---Denali

    Here are the five choices I see:
    - XSU111_ver1_2_1.zip -- to be used with JDBC 1.x (JDK 1.1.x or later) and loadable into Oracle 8.1.5 (486 KB)
    - XSU12_ver1_2_1.zip -- to be used with JDBC 2.0 (JDK 1.2.x or later) and loadable into Oracle 8.1.6 or later (508 KB)
    - XSU111_816_ver2_1_0_beta.zip -- to be used with JDBC 1.0 and JDK 1.1.8 (486 KB)
    - XSU12_816_ver2_1_0_beta.zip -- to be used with JDBC 2.0 (JDK 1.2.x or later) and loadable into Oracle 8.1.6 (486 KB)
    - XSU12_ver2_1_0_beta.zip -- to be used with JDBC 2.0 (JDK 1.2.x or later) and loadable into Oracle 8.1.7 or later (508 KB)

  • Dynamic PL/SQL block vs dynamic SQL SELECT

    Hi there,
    I have a question regarding the optimal way to code a dynamic SELECT INTO statement. Below are the two possibilities I know of:
    _1. Dynamically executing the SELECT statement and making use of the INTO clause of the EXECUTE IMMEDIATE statement_
    CREATE OR REPLACE FUNCTION get_num_of_employees (p_loc VARCHAR2, p_job VARCHAR2)
    RETURN NUMBER
    IS
    v_query_str VARCHAR2(1000);
    v_num_of_employees NUMBER;
    BEGIN
    v_query_str := 'SELECT COUNT(*) FROM emp_'
    || p_loc
    || ' WHERE job = :bind_job';
    EXECUTE IMMEDIATE v_query_str
    INTO v_num_of_employees
    USING p_job;
    RETURN v_num_of_employees;
    END;
    _2. Encapsulating the SELECT INTO statement in a block and dynamically exectuting the block_
    CREATE OR REPLACE FUNCTION get_num_of_employees (p_loc VARCHAR2, p_job VARCHAR2)
    RETURN NUMBER
    IS
    v_query_str VARCHAR2(1000);
    v_num_of_employees NUMBER;
    BEGIN
    v_query_str := 'begin
    SELECT COUNT(*) INTO :into_bind FROM emp_'
    || p_loc
    || ' WHERE job = :bind_job;
    end;';
    EXECUTE IMMEDIATE v_query_str
    USING out v_num_of_employees, p_job;
    RETURN v_num_of_employees;
    END;
    I was just wondering which way would be preferred? I know the second method uses a bind variable for the INTO clause, but does the first one also use bind variables (no semicolon)? Are there any differences in terms of efficiency or speed?
    Thanks a lot
    Edited by: BYS2 on Oct 19, 2011 1:23 AM

    sybrand_b wrote:
    No difference in terms of performance or speed
    Both variants will wreck the primary purpose of PL/SQL: to avoid parsing.
    When I would see a 'developer' do this, I would fire him on the spot.
    Why abuse PL/SQL in such a fashion? Both statements don't require parsing, as there is nothing dynamic in them and indicate a complete lack of understanding of Oracle, or a desire to deliver completely unscalable applications, resulting in end-users desiring to lynch you, and rightly so.
    Remove the dynamic SQL or find another job.
    Sybrand Bakker
    Senior Oracle DBA
    Not dynamic? What if p_loc and p_job were generated dynamically based on user input? Or what if there were potentially thousands of tables that p_loc could refer to? Should I make a CASE statement with a thousand cases?
    In addition, the first example was actually taken directly from the official Oracle Database Application Developer's Guide (version 10.2). http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_dynamic_sql.htm#i1006429 - look under 'Sample Single-Row Query Using Native Dynamic SQL' heading. Therefore, if you have any issues with this alleged 'improper' usage of dynamic SQL, perhaps you should go talk to Oracle directly.
    While I appreciate your response, I don't think it has occurred to you that not everyone is a 'developer'. In fact, I have only very recently (several days ago) taught myself how to use Oracle SQL, PL/SQL and XMLDB by reading several of the official Oracle language and developer's guides. It is more of a passing interest to me, as I am working on some medical research which may require the use of a database. I am actually in medical school at the moment but have an undergraduate degree in Electrical and Computer Engineering, so I am generally well-versed in programming.
    Perhaps the next time you post your rubbish, rude and unhelpful comments, you should stop and think that people come to this forum because they need help and not because they want to be told to 'find another job'. In fact, I am quite certain that I could make you look absolutely stupid in any topic of electrical engineering or medicine.
    Please do us all a favour and stop polluting this forum with your vapid posts. While I understand that your behavior is likely a compensatory mechanism to cope with your inferiority complex, know that help IS available if you need it.
    Edited by: BYS2 on Oct 19, 2011 2:13 AM

  • Issue with passing schema name as variable to sql file

    Hi,
    I have a scenario wherein a Java process (Process_1), connected as SYS, invokes SQL files and executes them in SQL*Plus.
    DB: Oracle 11.2.3.0
    Platform: Oracle Linux 5 (64-bit)
    Call_1.sql is being invoked by Java which contains the below content:-
    ALTER SESSION SET CURRENT_SCHEMA=&&1;
    UPDATE <table1> SET <Column1> = &&1;
    COMMIT;
    @Filename_1.sql
    Another process (Process_2) again from SYS user is also accessing Filename_1.sql.
    The content of Filename_1.sql is:-
    DECLARE
    cnt NUMBER := 0;
    BEGIN
      SELECT COUNT(1) INTO cnt FROM all_tables WHERE table_name = 'TEST' AND owner = '&Schema_name';
      IF cnt = 1 THEN
      BEGIN
        EXECUTE IMMEDIATE 'DROP TABLE TEST';
        dbms_output.put_line('Table dropped with success');
      END;
      END IF;
      SELECT COUNT(1) INTO cnt FROM all_tables WHERE table_name = 'TEST' AND owner = '&Schema_name';
      IF cnt = 0 THEN
      BEGIN
        EXECUTE IMMEDIATE 'CREATE TABLE TEST (name VARCHAR2(100) , ID NUMBER)';
        dbms_output.put_line('Table created with success');
      END;
      END IF;
    End;
    Process_2 uses the "&Schema_Name" identifier to populate the owner name in Filename_1.sql, but Process_1 needs to use "&&1" to populate the owner name. So I am looking for a way to modify Call_1.sql so that "&&1" can also populate the owner name values in Filename_1.sql, while avoiding any changes to Filename_1.sql.
    Any help would be appreciated.
    Thanks.

    Bad day for good code. Have yet to spot any posted today... Sadly, yours is just another ugly hack.
    The appropriate method for using SQL*Plus substitution variables (in an automated fashion) is as command line parameters, not as static/global variables defined by some other script run prior.
    So if a script is, for example, to create a schema, it should look something as follows:
    -- usage: create-schema.sql <schema_name>
    set verify off
    set define on
    create user &1 identified by .. default tablespace .. quota ... ;
    grant ... to &1;
    --eof
    If script 1 wants to call it direct then:
    -- script 1
    @create-schema SCOTT
    If script 2 want to call it using an existing variable:
    -- script 2
    @create-schema &SCHEMA
    Please - when hacking in this fashion, make an attempt to understand why the hack is needed and how it works. (and yes, the majority of SQL*Plus scripts fall into the CLI hack category). There's nothing simple, beautiful, or elegant about SQL*Plus scripts and their mainframe roots.
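    For the specific scripts in the question, one option is to have Call_1.sql map its positional parameter onto the substitution variable name that Filename_1.sql already expects, keeping Filename_1.sql untouched. This is only a sketch under the assumption that defining the variable in the caller is acceptable:
    -- Call_1.sql (sketch): bridge the positional parameter to &Schema_Name
    set verify off
    define Schema_Name = &&1
    ALTER SESSION SET CURRENT_SCHEMA = &&1;
    @Filename_1.sql
    Process_2 can continue to define &Schema_Name its own way; both callers then satisfy the single reference inside Filename_1.sql.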

  • How to get the usage of SSRS reports in project server 2010

    Hi
    Can anybody tell me how to get the usage of the SSRS reports in Project Server 2010?
    Thanks
    Geeth
    If you feel that the answer I gave you is helpful, please select it as Answer/Helpful.

    Hello,
    See the links below on how to get the usage for SSRS reports:
    http://sqlbadboy.wordpress.com/2013/09/12/reporting-services-reports-whos-using-them/
    http://www.mssqltips.com/sqlservertip/1908/analyze-report-execution-and-usage-statistics-in-sql-server-reporting-services/
    http://www.mssqltips.com/sqlservertip/1306/how-to-know-what-reporting-services-reports-are-being-used/
    Paul
    Paul Mather | Twitter |
    http://pwmather.wordpress.com | CPS

  • Profiler in SQL Developer not working

    When I run the profiler in SQL Developer, I get the following error:
    "Directory exists; check if /tmp exists on file system, and oracle has permission to write there. "
    This error sounds like a UNIX error, but I am running SQL Developer on Windows 7 (64-bit). Has anyone seen this or know how to get around it?
    Thanks so much..

    Hi,
    According to the SQL Developer 3.1.04.72 documentation, the "get" command (among others) is not supported:
    Help|Table of Contents|SQL Developer Concepts and Usage|Using the SQL Worksheet|SQL*Plus Statement Supported and Not Supported...
    so options are limited. If sqlplus is accessible and using it as an "external tool" won't conflict entirely with local policy, these links might interest you:
    Re: sqlplus vs sqldeveloper
    Easy Connect and sqldev.conn issues
    Otherwise you may add a feature request for this on the SQL Developer and see if such an enhancement is a priority for the community.
    Regards,
    Gary
    SQL Developer Team

  • ORA-02393 Exceeded Call Limit on CPU Usage

    I have created a Profile and attached it to a user, in this example:
    Create Profile percall
    Limit
    CPU_PER_CALL 10
    IDLE_TIME 5;
    I have attached it to one user - USER1
    When USER1 runs a SQL Statement -
    SELECT COUNT(*) FROM TABLE1 A WHERE A.EFFDT = (SELECT MAX(B.EFFDT) WHERE B.EMPLID = A.EMPLID AND B.EFFDT <= SYSDATE);
    I get an error (which I want to receive): ORA-02393 exceeded call limit on CPU usage.
    The SQL statement shows up in the table DBA_COMMON_AUDIT_TRAIL, but it shows a success even though the user received the ORA-02393 error.
    What I want is a way for a DBA to be able to report on those ORA-02393 errors. I don't see any entries in the log files, and don't notice any errors in the Oracle tables.
    I would like to be able to show the user (after a week, when they bring up the issue) what the SQL statement was and why it exceeded the CPU usage. Ideally the error would place the SQL statement in a table, or at least record it in an error log along with the statement, to verify that THIS is the statement which exceeded the CPU usage.
    Thank you
    Aaron

    Can you modify the procedure in which the SELECT resides?
    If so, trap and log the error.
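    A minimal sketch of that trap-and-log idea, following the reply's suggestion; the log table, procedure name and logged text are hypothetical, and since CPU_PER_CALL budgets the whole call it is worth testing that the handler itself still completes:
    CREATE TABLE cpu_limit_log (
        logged_at   DATE,
        username    VARCHAR2(30),
        note        VARCHAR2(4000),
        error_text  VARCHAR2(4000)
    );
    CREATE OR REPLACE PROCEDURE count_current_rows (p_result OUT NUMBER) IS
        cpu_limit_exceeded EXCEPTION;
        PRAGMA EXCEPTION_INIT(cpu_limit_exceeded, -2393);   -- map ORA-02393 to a named exception
    BEGIN
        SELECT COUNT(*)
          INTO p_result
          FROM table1 a
         WHERE a.effdt = (SELECT MAX(b.effdt)
                            FROM table1 b
                           WHERE b.emplid = a.emplid
                             AND b.effdt <= SYSDATE);
    EXCEPTION
        WHEN cpu_limit_exceeded THEN
            -- record which statement hit the CPU_PER_CALL limit, then re-raise to the caller
            INSERT INTO cpu_limit_log
            VALUES (SYSDATE, USER, 'count_current_rows: effective-dated COUNT on table1', SQLERRM);
            COMMIT;   -- or call an autonomous-transaction logger to avoid committing the caller's work
            RAISE;
    END count_current_rows;
    /
    Re-raising keeps the original behaviour for the user, while the log row gives the DBA the statement and error text to report on later.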
