
SQL problem

  • 13-06-2006 1:40pm
    #1
    Closed Accounts Posts: 584 ✭✭✭


    Hi all,

    I have a SQL Server problem. I have a table of 9000 records. I have just added a column called rec_id to the table, of type integer. I want rec_id to be the primary key. Have any of you any idea how to populate the rec_id field with incrementing numbers?

    E.g.  rec_id  custID
          1       12345
          2       12345
          3       98764
          .       .
          .       .

    Jeff


Comments

  • Registered Users Posts: 1,193 ✭✭✭liamo


    From Enterprise Manager: when you add a column, set it to be the primary key and set the Identity property to True; the incrementing values will be populated automatically.
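
    For anyone without EM to hand, a rough script equivalent - assuming the freshly added, still-empty rec_id column can simply be dropped and re-created as an identity, and using dbo.MyTable as a stand-in for the real table name - would be something like:

        -- sketch only: dbo.MyTable stands in for the real table name
        ALTER TABLE dbo.MyTable DROP COLUMN rec_id
        GO
        -- re-adding the column as an identity numbers the existing rows 1, 2, 3, ...
        ALTER TABLE dbo.MyTable ADD rec_id INT IDENTITY(1,1) NOT NULL
        GO
        -- and make it the primary key
        ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (rec_id)
        GO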

    Regards,

    Liam


  • Moderators, Politics Moderators Posts: 39,812 Mod ✭✭✭✭Seth Brundle


    I was looking into something like this last week - can you set the identity property on an already populated table?

    Edit: sorry - you can from the EM, but not really from script.


  • Registered Users Posts: 15,443 ✭✭✭✭bonkey


    From script it would indeed be a bit messier.

    The following URL gives some good ideas... although it's mostly about SELECTing data:

    http://support.microsoft.com/default.aspx?scid=KB;EN-US;q186133


    One could always loop through a cursor and do an UPDATE WHERE CURRENT OF.
    Or select the data into a new table and do a "table swap", in the same style that many changes made through the EM result in.
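
    For illustration, a minimal sketch of the cursor idea (table and column names are placeholders, and it assumes rec_id is still a plain int column):

        -- sketch only: dbo.MyTable / rec_id are placeholders
        DECLARE @i INT, @dummy INT
        SET @i = 0

        DECLARE numbering_cur CURSOR FOR
            SELECT rec_id FROM dbo.MyTable
            FOR UPDATE OF rec_id

        OPEN numbering_cur
        FETCH NEXT FROM numbering_cur INTO @dummy

        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @i = @i + 1
            -- number the rows 1, 2, 3, ... as the cursor walks the table
            UPDATE dbo.MyTable SET rec_id = @i WHERE CURRENT OF numbering_cur
            FETCH NEXT FROM numbering_cur INTO @dummy
        END

        CLOSE numbering_cur
        DEALLOCATE numbering_cur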

    But yeah...use the EM unless there's a compelling reason not to.

    jc


  • Moderators, Politics Moderators Posts: 39,812 Mod ✭✭✭✭Seth Brundle


    I was doing it with an online DB (where I have no EM access) last week and it buggered up the whole lot!
    I copied the script generated by the EM on a local copy of the DB and then ran that online, but it messed the whole lot up.
    I then had to recreate the tables, copy the data across and then delete the original tables & rename the new ones. PITA!


  • Registered Users Posts: 2,781 ✭✭✭amen


    Well, before changing any data I would always:
    A: stop anyone from using the database while I was making the change (you would be surprised how many people forget to do this)
    B: bcp out the data
    C: make a copy of the table and copy the data into it

    I would probably have taken a sample of the live data and tested my script on that first as well.
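
    For what it's worth, the bcp step is a one-liner from the command prompt; the database, table and server names below are placeholders, and swapping "out" for "in" loads the file back:

        bcp MyDatabase.dbo.MyTable out MyTable.bcp -n -S MYSERVER -T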


  • Moderators, Politics Moderators Posts: 39,812 Mod ✭✭✭✭Seth Brundle


    In my case the DB was not yet available to the public and had been restored online from a local backup, but yes, your points are valid.


  • Registered Users Posts: 1,456 ✭✭✭FSL


    If you have Enterprise Manager locally, you can add the remote server to your SQL Server group and use your local Enterprise Manager to access the remote database, provided your login has the necessary permissions.


  • Moderators, Politics Moderators Posts: 39,812 Mod ✭✭✭✭Seth Brundle


    The EM isn't an option open to me as it's locked down from outside, and the DB seems to be available only to the (shared hosting) web server.


  • Registered Users Posts: 1,456 ✭✭✭FSL


    Sorry, I thought you said you had EM available on your local machine. If you have been able to upload the database/tables to the web server then presumably you have a valid userid/pwd to access the web server, and that userid/pwd is valid for the SQL Server on the web server. So if you have established a connection to the web server (as you would if you were uploading data to it), and you have EM running on your local machine, you should be able to register the SQL Server in EM on your local machine - provided, of course, that the SQL Server on the web server is MSSQL. I have clients with MSSQL Server and, provided I have logged onto their machine, I can run EM on my own machine and access those databases on their machine which my login permits.


  • Closed Accounts Posts: 161 ✭✭JuncoPartner


    It is actually possible to lock usernames and passwords out of Enterprise Manager fairly easily.
    EM and MS just hide all the crap from you, but they are just executing SQL code behind all their fancy wizards. What EM/MS would do here is:
    build a table that mirrors the table you are editing,
    insert into it by selecting all from the table you are editing,
    add a new column to the table you are editing and set it to IDENTITY(1,1) or whatever you are using,
    re-insert all the data by selecting all from the temp table,
    drop the temp table and bam, you're done.
    It's pretty simple.
    I would use a regular table too, by the way - actual #temp tables are very dangerous and cumbersome because they lock down the whole tempdb, but you probably already knew that.
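
    Taken literally, and with placeholder names, those steps come out something like the sketch below. It glosses over indexes and constraints, assumes the plain rec_id column hasn't been added yet, and includes the delete that's needed before re-inserting (otherwise every row would end up duplicated):

        -- sketch only: dbo.MyTable / dbo.MyTable_holder / custID are placeholders
        -- 1. build a mirror table and copy the data into it
        SELECT custID   -- plus whatever other columns exist
        INTO dbo.MyTable_holder
        FROM dbo.MyTable

        -- 2. clear the original and add the identity column
        DELETE FROM dbo.MyTable
        ALTER TABLE dbo.MyTable ADD rec_id INT IDENTITY(1,1) NOT NULL

        -- 3. re-insert from the holder and drop it
        INSERT INTO dbo.MyTable (custID)
        SELECT custID FROM dbo.MyTable_holder

        DROP TABLE dbo.MyTable_holder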


  • Registered Users Posts: 15,443 ✭✭✭✭bonkey


    JuncoPartner wrote:
    It's pretty simple.
    Which is why you didn't mention rebuilding indexes, "moving" foreign key constraints and the other stuff that might also need to be done.

    Oh - and the approach assumes there is enough space available for two copies of the table.
    JuncoPartner wrote:
    I would use a regular table too, by the way - actual #temp tables are very dangerous and cumbersome because they lock down the whole tempdb, but you probably already knew that.
    No, they don't. If they ever did, it was back in the days of 6.5.


  • Closed Accounts Posts: 161 ✭✭JuncoPartner


    bonkey wrote:
    Which is why you didn't mention rebuilding indexes, "moving" foreign key constraints and the other stuff that might also need to be done.

    Oh - and the approach assumes there is enough space available for two copies of the table.


    No, they don't. If they ever did, it was back in the days of 6.5.

    Hmm, not to sound like an a$$ or anything, but yes they do, that is a fact: building and querying a temp table places a lock on the entire tempdb. Also, as a side note, every transaction against any table places locks on it. But it's easy to get around most of them with a nice with(nolock). Doesn't work with temp tables or anything in tempdb though.

    As for your first point, mes apologies, but temporarily moving data to one table and back is simple enough, yes. I'm not talking about blowing up a table here. You merely move to a holder table, alter the original and move back. Why would you have to rebuild your indexes? OK, if your DB is in a jock in the first place and full of broken keys etc. then you're in trouble. But the fact is this is exactly how EM/MS does it, so either way I guess you're boned!

    And yes, the approach does assume there is enough space available for two copies of the table. Sorry about that. Though if you have a space issue emptying one table into another and back again, there's surely something wrong. Or is he using some version of SQL Server that creates 50MB empty tables?

    Anywho, have you got anything constructive to say to help solve the man's problem? :P


  • Registered Users Posts: 15,443 ✭✭✭✭bonkey


    JuncoPartner wrote:
    Hmm, not to sound like an a$$ or anything, but yes they do, that is a fact: building and querying a temp table places a lock on the entire tempdb.

    Creating a temp table locks tempdb for the duration of the create. This also means if you use SELECT..INTO... then the lock will persist until the table is both created and populated.

    However, if you create the table quickly (either with a CREATE TABLE or with a SELECT... INTO... WHERE 1=0) and then populate separately, there's no problem.
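
    For example (placeholder table/column names):

        -- creates just the empty structure, so the locks on the tempdb system tables are held only briefly
        SELECT custID INTO #holder FROM dbo.MyTable WHERE 1 = 0

        -- populating it afterwards is ordinary DML and doesn't block the rest of tempdb
        INSERT INTO #holder (custID)
        SELECT custID FROM dbo.MyTable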
    JuncoPartner wrote:
    Also, as a side note, every transaction against any table places locks on it. But it's easy to get around most of them with a nice with(nolock).
    Easy, yes. Wise? Necessary? Desirable?

    Locks and lock management are not simple things. There is no "easy get-around" that should be used without fully understanding what's going on.
    JuncoPartner wrote:
    Why would you have to rebuild your indexes?
    Maybe because they don't exist on the new table?

    Or were you suggesting it would be a good idea to create the indexes when you create the table....before you populate it?
    JuncoPartner wrote:
    But the fact is this is exactly how EM/MS does it, so either way I guess you're boned!
    The EM will manage indexes, constraints, foreign-key constraints etc. all for you. Your description omitted all of these bits.

    I was just pointing out that it's not necessarily as simple as you make it out to be.

    It's a bit like saying it's as easy as A, B, C while forgetting to mention that it's often not actually that easy, but rather will require steps D through P as well.
    JuncoPartner wrote:
    And yes, the approach does assume there is enough space available for two copies of the table. Sorry about that. Though if you have a space issue emptying one table into another and back again, there's surely something wrong.
    I work with a system where one of the tables is over 50% of the available disk space. You can't "fix" a table this size through duplication.
    JuncoPartner wrote:
    Or is he using some version of SQL Server that creates 50MB empty tables?

    hallelujah mentioned his table has 9000 rows; without knowing the row length, it's impossible to know how much data is in the table. BLOB/CLOB fields mean that even assuming the maximum row length per row isn't a sure bet.

    In kbannon's case, none of that information was mentioned, so it's impossible to know how much data is in the table.

    This means in both cases, it's impossible to know how much space is needed for the second table that the data will be copied to. We also don't know how much space is in the database and/or whether or not expansion is an option.

    But you're right....shame on me for pointing out the potential causes for problems. We'd all be much better off just assuming things won't be issues.
    JuncoPartner wrote:
    Anywho, have you got anything constructive to say to help solve the man's problem? :P
    What I said in my first post.


  • Closed Accounts Posts: 161 ✭✭JuncoPartner


    bonkey wrote:
    Creating a temp table locks tempdb for the duration of the create. This also means if you use SELECT..INTO... then the lock will persist until the table is both created and populated.

    OK, I don't know what you do, but you really need to read what you're saying, mate. A few posts above you're talking about using a cursor when there's no need to use a cursor; in fact, although there are a hell of a lot of cases where cursors are unavoidable, 90% of the time the better solution is a memory table and a while loop.
    So you do or don't agree that using temp tables locks the tempdb? First you say it doesn't? Now you say it does?
    bonkey wrote:
    However, if you create the table quickly (either with a CREATE TABLE or with a SELECT... INTO... WHERE 1=0) and then populate separately, there's no problem.
    Yep, that's indeed the best way to do it if you must use a temp table. But will it populate itself by magic? While you work with a temp table you lock the tempdb. So populating it after the create will still lock your tempdb.
    bonkey wrote:
    Easy, yes. Wise? Necessary? Desirable?

    Locks and lock management are not simple things. There is no "easy get-around" that should be used without fully understanding what's going on.
    99.9999% of the time it is perfectly acceptable to use a with(nolock) on a select statement. I should have been clearer, I guess.
    bonkey wrote:
    Maybe because they don't exist on the new table?

    Or were you suggesting it would be a good idea to create the indexes when you create the table....before you populate it?


    The EM will manage indexes, constraints, foreign-key constraints etc. all for you. Your description omitted all of these bits.
    I'm not sure I follow here. Are you talking about the tmp table or the holder table, as it were? Why bother with indexes on it? Who cares - it's only there to temporarily hold the data. If you are merely altering your original table then yes, the indexes already exist and the DBMS manages them as soon as you make an insert anyway.

    bonkey wrote:
    I work with a system where one of the tables is over 50% of the available disk space. You can't "fix" a table this size through duplication.

    hallelujah mentioned his table has 9000 rows; without knowing the row length, it's impossible to know how much data is in the table. BLOB/CLOB fields mean that even assuming the maximum row length per row isn't a sure bet.

    In kbannon's case, none of that information was mentioned, so it's impossible to know how much data is in the table.

    This means in both cases, it's impossible to know how much space is needed for the second table that the data will be copied to. We also don't know how much space is in the database and/or whether or not expansion is an option.
    Actually, I'll give you that one. I posted without thinking there; you're 100% correct.


  • Registered Users Posts: 15,443 ✭✭✭✭bonkey


    JuncoPartner wrote:
    OK, I don't know what you do, but you really need to read what you're saying, mate.
    I know what I'm saying.
    JuncoPartner wrote:
    A few posts above you're talking about using a cursor when there's no need to use a cursor
    I said you could use a cursor. I never said it was the best approach in every situation.
    JuncoPartner wrote:
    So you do or don't agree that using temp tables locks the tempdb?
    I do not agree.

    Database-level locks are used to prevent databases from being dropped while users are connected to them. This does not apply to either master or tempdb as neither can be dropped in normal operation.

    The next-most coarse level of locking is table-level, which clearly won't be a problem, as other users won't be affected by a table-level lock on your temp table.

    The tempdb contention issue from SELECT..INTO... is caused by the locks placed on the system tables while a table is being created. Once the creation is completed, you can happily do an INSERT INTO...SELECT FROM... and the coarsest lock which should be used on tempdb is a table-level lock.
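
    Anyone who wants to check this for themselves can watch it happen with sp_lock (run from a second connection), which lists the locks a session is holding while it creates and fills a temp table:

        -- lists current locks by spid: you'll see object/page/key locks in tempdb, not an exclusive database lock
        EXEC sp_lock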
    JuncoPartner wrote:
    First you say it doesn't? Now you say it does?
    I did say locking wasn't simple :)
    JuncoPartner wrote:
    While you work with a temp table you lock the tempdb. So populating it after the create will still lock your tempdb.
    What do you mean by "lock your tempdb" ???

    As I pointed out, database-level locks only exist to stop the database being dropped. Nothing more. This cannot and does not happen on non-droppable databases.

    You can block other users from creating objects in tempdb, which may effectively block their queries, but as I pointed out this is only during object creation (while you have exclusive locks on the system tables).

    Using the table doesn't stop other people using tempdb, nor from creating their own objects etc. So while there will be locks on objects in tempdb, there isn't a database-level lock, nor is there any blocking resulting from "regular" usage such as table population.
    JuncoPartner wrote:
    99.9999% of the time it is perfectly acceptable to use a with(nolock) on a select statement. I should have been clearer, I guess.
    You can be as clear as you like, but I'm simply pointing out (again) that locks and locking are not a simple or trivial subject.

    Blindly using optimiser hints that sacrifice ACID properties because someone on the net told you it's almost always fine to do so is not what I would consider good practice.

    Yes, there can be benefits to using a hint like nolock. There's also a good reason why it's also referred to as a "dirty read".
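
    For reference, the hint being discussed looks like this (placeholder names); it's shorthand for the READUNCOMMITTED hint, hence the "dirty read" label:

        -- may return uncommitted rows that another transaction later rolls back
        SELECT custID FROM dbo.MyTable WITH (NOLOCK)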
    JuncoPartner wrote:
    I'm not sure I follow here. Are you talking about the tmp table or the holder table, as it were? Why bother with indexes on it?
    Sorry - I misread your original description.

    I thought you were doing what you claimed (i.e. "what the EM does") whereas in fact you're using a variant which can lead to even bigger problems. I'll deal with both cases here.

    In the EM-scripted approach, the original table is dropped, and the temp table is renamed to what the original was called. In this case, there's a total table replacement so both tables need indexes.

    In your approach, you copy out and re-insert. Although you don't mention it, obviously you have to delete before re-inserting (or you'll have duplicates). To do the deletes you'll need to drop/disable any foreign-key constraints which rely on the data being present.

    Once you've dropped and re-inserted the entire set of data....well....if you don't think your indexes are worth rebuilding after that, I guess we just fundamentally disagree on our index-maintenance strategies. Personally, I'd have dropped the indexes before the delete and rebuilt them after the re-insert. It's faster (deletes and inserts on non-indexed structures) and has the advantage that my index (both structure and statistics) is fully optimised and balanced when I'm done.
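
    For completeness, the EM-scripted swap described above comes out roughly as below, with placeholder names; a real EM script also recreates every index, constraint and foreign key on the new table:

        -- sketch only: build the replacement table with the identity column already in place
        CREATE TABLE dbo.Tmp_MyTable (
            rec_id INT IDENTITY(1,1) NOT NULL,
            custID INT NOT NULL
        )

        -- copy the data across; rec_id gets 1, 2, 3, ... as the rows go in
        INSERT INTO dbo.Tmp_MyTable (custID)
        SELECT custID FROM dbo.MyTable

        -- swap the tables and put the primary key on
        DROP TABLE dbo.MyTable
        EXEC sp_rename 'dbo.Tmp_MyTable', 'MyTable'
        ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (rec_id)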

